Test Report: QEMU_macOS 17297

Commit d70abdd8c088cadcf8720531a75f8262065eb1b0 | 2023-09-25 | 31157

Failed tests (91/255)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 12.5
7 TestDownloadOnly/v1.16.0/kubectl 0
20 TestOffline 9.86
24 TestAddons/parallel/Registry 720.89
25 TestAddons/parallel/Ingress 0.77
27 TestAddons/parallel/MetricsServer 720.82
30 TestAddons/parallel/CSI 720.85
32 TestAddons/parallel/CloudSpanner 817.77
37 TestCertOptions 9.9
38 TestCertExpiration 195.18
39 TestDockerFlags 10.37
40 TestForceSystemdFlag 11.73
41 TestForceSystemdEnv 9.9
86 TestFunctional/parallel/ServiceCmdConnect 34.85
88 TestFunctional/parallel/PersistentVolumeClaim 240.97
153 TestImageBuild/serial/BuildWithBuildArg 1.09
162 TestIngressAddonLegacy/serial/ValidateIngressAddons 53.95
197 TestMountStart/serial/StartWithMountFirst 10.09
200 TestMultiNode/serial/FreshStart2Nodes 9.83
201 TestMultiNode/serial/DeployApp2Nodes 88.32
202 TestMultiNode/serial/PingHostFrom2Pods 0.08
203 TestMultiNode/serial/AddNode 0.07
204 TestMultiNode/serial/ProfileList 0.1
205 TestMultiNode/serial/CopyFile 0.06
206 TestMultiNode/serial/StopNode 0.13
207 TestMultiNode/serial/StartAfterStop 0.1
208 TestMultiNode/serial/RestartKeepsNodes 5.36
209 TestMultiNode/serial/DeleteNode 0.09
210 TestMultiNode/serial/StopMultiNode 0.14
211 TestMultiNode/serial/RestartMultiNode 5.25
212 TestMultiNode/serial/ValidateNameConflict 19.69
216 TestPreload 9.91
218 TestScheduledStopUnix 9.94
219 TestSkaffold 11.8
222 TestRunningBinaryUpgrade 126.3
224 TestKubernetesUpgrade 15.32
237 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.42
238 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.11
239 TestStoppedBinaryUpgrade/Setup 171.29
241 TestPause/serial/Start 9.85
251 TestNoKubernetes/serial/StartWithK8s 9.76
252 TestNoKubernetes/serial/StartWithStopK8s 5.31
253 TestNoKubernetes/serial/Start 5.3
257 TestNoKubernetes/serial/StartNoArgs 5.3
259 TestNetworkPlugins/group/auto/Start 9.72
260 TestNetworkPlugins/group/calico/Start 9.74
261 TestNetworkPlugins/group/custom-flannel/Start 9.77
262 TestNetworkPlugins/group/false/Start 9.69
263 TestNetworkPlugins/group/kindnet/Start 9.79
264 TestNetworkPlugins/group/flannel/Start 9.72
265 TestNetworkPlugins/group/enable-default-cni/Start 9.75
266 TestNetworkPlugins/group/bridge/Start 9.82
267 TestNetworkPlugins/group/kubenet/Start 9.79
268 TestStoppedBinaryUpgrade/Upgrade 2.25
269 TestStoppedBinaryUpgrade/MinikubeLogs 0.12
271 TestStartStop/group/old-k8s-version/serial/FirstStart 10.03
273 TestStartStop/group/no-preload/serial/FirstStart 12.05
274 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
275 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
278 TestStartStop/group/old-k8s-version/serial/SecondStart 7.03
279 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
280 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
281 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
282 TestStartStop/group/old-k8s-version/serial/Pause 0.09
284 TestStartStop/group/embed-certs/serial/FirstStart 11.56
285 TestStartStop/group/no-preload/serial/DeployApp 0.09
286 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.12
289 TestStartStop/group/no-preload/serial/SecondStart 7.17
290 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
291 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
292 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
293 TestStartStop/group/no-preload/serial/Pause 0.1
295 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 11.04
296 TestStartStop/group/embed-certs/serial/DeployApp 0.09
297 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.11
300 TestStartStop/group/embed-certs/serial/SecondStart 6.94
301 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
302 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
303 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
304 TestStartStop/group/embed-certs/serial/Pause 0.09
306 TestStartStop/group/newest-cni/serial/FirstStart 11.3
307 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
308 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
311 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 6.99
312 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
313 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
314 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
315 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.09
320 TestStartStop/group/newest-cni/serial/SecondStart 5.25
323 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.08
324 TestStartStop/group/newest-cni/serial/Pause 0.1
TestDownloadOnly/v1.16.0/json-events (12.5s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-427000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-427000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=qemu2 : exit status 40 (12.496605583s)

-- stdout --
	{"specversion":"1.0","id":"17130d8b-5f93-461f-a11a-a5002dd495d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-427000] minikube v1.31.2 on Darwin 13.6 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a4d55256-b2cb-4f52-99b8-ad6a8014b2eb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17297"}}
	{"specversion":"1.0","id":"cde46f20-2d21-4c7a-9a14-c1d39eaff4b7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig"}}
	{"specversion":"1.0","id":"06a8e66e-2101-4bb2-bc4e-0742df67f937","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"bb73b319-a7da-40a8-9815-78524c2ce23e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"61dee53c-3978-4a5a-9204-9ccd9340088e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube"}}
	{"specversion":"1.0","id":"5a6181e3-1a10-4baf-bb36-d7d9b2cce7a9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"195e1b39-9b73-42d9-9487-6119e4dbe41b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"cc2e410b-8f0f-4069-8aa3-8ca6528c800f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"71f812f9-37b1-409f-a88a-9a4aa141f496","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"93e53fd7-376f-45f5-9633-39f1870b02e4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node download-only-427000 in cluster download-only-427000","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"a913c1da-d252-4a6e-a0e7-82aceb03a8fe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.16.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"88ce9ffd-4257-45e2-8e7e-76212e57b068","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/17297-1010/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x103ce5800 0x103ce5800 0x103ce5800 0x103ce5800 0x103ce5800 0x103ce5800 0x103ce5800] Decompressors:map[bz2:0x14000512dd0 gz:0x14000512dd8 tar:0x14000512d70 tar.bz2:0x14000512d90 tar.gz:0x14000512da0 tar.xz:0x14000512db0 tar.zst:0x14000512dc0 tbz2:0x14000512d90 tgz:0x140005
12da0 txz:0x14000512db0 tzst:0x14000512dc0 xz:0x14000512de0 zip:0x14000512df0 zst:0x14000512de8] Getters:map[file:0x14000062710 http:0x1400017e640 https:0x1400017e690] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"144a3776-d178-4e23-9338-1fd1fbd4b42c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0925 03:33:20.150406    1471 out.go:296] Setting OutFile to fd 1 ...
	I0925 03:33:20.150568    1471 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 03:33:20.150575    1471 out.go:309] Setting ErrFile to fd 2...
	I0925 03:33:20.150578    1471 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 03:33:20.150701    1471 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17297-1010/.minikube/bin
	W0925 03:33:20.150779    1471 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17297-1010/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17297-1010/.minikube/config/config.json: no such file or directory
	I0925 03:33:20.151871    1471 out.go:303] Setting JSON to true
	I0925 03:33:20.168318    1471 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":175,"bootTime":1695637825,"procs":397,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.6","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 03:33:20.168401    1471 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0925 03:33:20.175846    1471 out.go:97] [download-only-427000] minikube v1.31.2 on Darwin 13.6 (arm64)
	I0925 03:33:20.181840    1471 out.go:169] MINIKUBE_LOCATION=17297
	W0925 03:33:20.176014    1471 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball: no such file or directory
	I0925 03:33:20.176072    1471 notify.go:220] Checking for updates...
	I0925 03:33:20.192666    1471 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	I0925 03:33:20.196813    1471 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 03:33:20.199858    1471 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 03:33:20.201302    1471 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	W0925 03:33:20.207799    1471 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0925 03:33:20.207991    1471 driver.go:373] Setting default libvirt URI to qemu:///system
	I0925 03:33:20.212801    1471 out.go:97] Using the qemu2 driver based on user configuration
	I0925 03:33:20.212820    1471 start.go:298] selected driver: qemu2
	I0925 03:33:20.212834    1471 start.go:902] validating driver "qemu2" against <nil>
	I0925 03:33:20.212902    1471 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0925 03:33:20.216761    1471 out.go:169] Automatically selected the socket_vmnet network
	I0925 03:33:20.222426    1471 start_flags.go:384] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0925 03:33:20.222512    1471 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0925 03:33:20.222569    1471 cni.go:84] Creating CNI manager for ""
	I0925 03:33:20.222586    1471 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0925 03:33:20.222591    1471 start_flags.go:321] config:
	{Name:download-only-427000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-427000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 03:33:20.228273    1471 iso.go:125] acquiring lock: {Name:mkf881a60cf9fd1672567914305ff6f7a4f13809 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 03:33:20.232654    1471 out.go:97] Downloading VM boot image ...
	I0925 03:33:20.232673    1471 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso
	I0925 03:33:25.555005    1471 out.go:97] Starting control plane node download-only-427000 in cluster download-only-427000
	I0925 03:33:25.555031    1471 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0925 03:33:25.614606    1471 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0925 03:33:25.614644    1471 cache.go:57] Caching tarball of preloaded images
	I0925 03:33:25.614805    1471 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0925 03:33:25.619935    1471 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0925 03:33:25.619942    1471 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0925 03:33:25.697617    1471 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4?checksum=md5:a000baffb0664b293d602f95ed25caa6 -> /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0925 03:33:31.703423    1471 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0925 03:33:31.703562    1471 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0925 03:33:32.343425    1471 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0925 03:33:32.343613    1471 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/download-only-427000/config.json ...
	I0925 03:33:32.343634    1471 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/download-only-427000/config.json: {Name:mk73556e20767bba9803568dbbfd5b8f39da6dad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 03:33:32.343852    1471 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0925 03:33:32.344010    1471 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 -> /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/darwin/arm64/v1.16.0/kubectl
	I0925 03:33:32.582962    1471 out.go:169] 
	W0925 03:33:32.588065    1471 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/17297-1010/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x103ce5800 0x103ce5800 0x103ce5800 0x103ce5800 0x103ce5800 0x103ce5800 0x103ce5800] Decompressors:map[bz2:0x14000512dd0 gz:0x14000512dd8 tar:0x14000512d70 tar.bz2:0x14000512d90 tar.gz:0x14000512da0 tar.xz:0x14000512db0 tar.zst:0x14000512dc0 tbz2:0x14000512d90 tgz:0x14000512da0 txz:0x14000512db0 tzst:0x14000512dc0 xz:0x14000512de0 zip:0x14000512df0 zst:0x14000512de8] Getters:map[file:0x14000062710 http:0x1400017e640 https:0x1400017e690] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0925 03:33:32.588091    1471 out_reason.go:110] 
	W0925 03:33:32.593919    1471 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0925 03:33:32.598026    1471 out.go:169] 

** /stderr **
aaa_download_only_test.go:71: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-427000" "--force" "--alsologtostderr" "--kubernetes-version=v1.16.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.16.0/json-events (12.50s)
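
The failing step above is a plain HTTP 404: Kubernetes v1.16.0 predates darwin/arm64 client builds, so neither the kubectl binary nor its .sha1 checksum exists at dl.k8s.io, and minikube's kubectl caching aborts with exit status 40. A minimal Go sketch (not part of the test suite; the URL is copied verbatim from the error above) that reproduces the failing request:

package main

import (
	"fmt"
	"net/http"
)

// Probe the v1.16.0 darwin/arm64 kubectl checksum URL that the test tried
// to fetch. Expect "404 Not Found", matching the log above.
func main() {
	url := "https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1"
	resp, err := http.Head(url)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println(url, "->", resp.Status)
}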

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:160: expected the file for binary exist at "/Users/jenkins/minikube-integration/17297-1010/.minikube/cache/darwin/arm64/v1.16.0/kubectl" but got error stat /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/darwin/arm64/v1.16.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestOffline (9.86s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-464000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-464000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.726819458s)

-- stdout --
	* [offline-docker-464000] minikube v1.31.2 on Darwin 13.6 (arm64)
	  - MINIKUBE_LOCATION=17297
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node offline-docker-464000 in cluster offline-docker-464000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-464000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0925 04:20:55.048228    4685 out.go:296] Setting OutFile to fd 1 ...
	I0925 04:20:55.048384    4685 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:20:55.048387    4685 out.go:309] Setting ErrFile to fd 2...
	I0925 04:20:55.048390    4685 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:20:55.048512    4685 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17297-1010/.minikube/bin
	I0925 04:20:55.049861    4685 out.go:303] Setting JSON to false
	I0925 04:20:55.066777    4685 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3030,"bootTime":1695637825,"procs":412,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.6","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 04:20:55.066855    4685 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0925 04:20:55.070770    4685 out.go:177] * [offline-docker-464000] minikube v1.31.2 on Darwin 13.6 (arm64)
	I0925 04:20:55.078569    4685 notify.go:220] Checking for updates...
	I0925 04:20:55.078572    4685 out.go:177]   - MINIKUBE_LOCATION=17297
	I0925 04:20:55.081559    4685 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	I0925 04:20:55.084621    4685 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 04:20:55.087579    4685 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 04:20:55.088965    4685 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	I0925 04:20:55.091485    4685 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 04:20:55.094960    4685 config.go:182] Loaded profile config "multinode-352000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 04:20:55.095011    4685 driver.go:373] Setting default libvirt URI to qemu:///system
	I0925 04:20:55.098427    4685 out.go:177] * Using the qemu2 driver based on user configuration
	I0925 04:20:55.105514    4685 start.go:298] selected driver: qemu2
	I0925 04:20:55.105533    4685 start.go:902] validating driver "qemu2" against <nil>
	I0925 04:20:55.105566    4685 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 04:20:55.107610    4685 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0925 04:20:55.110588    4685 out.go:177] * Automatically selected the socket_vmnet network
	I0925 04:20:55.113653    4685 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0925 04:20:55.113676    4685 cni.go:84] Creating CNI manager for ""
	I0925 04:20:55.113683    4685 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 04:20:55.113686    4685 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0925 04:20:55.113692    4685 start_flags.go:321] config:
	{Name:offline-docker-464000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:offline-docker-464000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 04:20:55.118008    4685 iso.go:125] acquiring lock: {Name:mkf881a60cf9fd1672567914305ff6f7a4f13809 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 04:20:55.122590    4685 out.go:177] * Starting control plane node offline-docker-464000 in cluster offline-docker-464000
	I0925 04:20:55.130538    4685 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0925 04:20:55.130564    4685 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0925 04:20:55.130575    4685 cache.go:57] Caching tarball of preloaded images
	I0925 04:20:55.130636    4685 preload.go:174] Found /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0925 04:20:55.130641    4685 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0925 04:20:55.130703    4685 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/offline-docker-464000/config.json ...
	I0925 04:20:55.130714    4685 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/offline-docker-464000/config.json: {Name:mk192484b19bb652b997d2934c3aa9d265941ac1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 04:20:55.130896    4685 start.go:365] acquiring machines lock for offline-docker-464000: {Name:mk918d99818e2cf21e5912bc291ed18d4b442ba6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 04:20:55.130926    4685 start.go:369] acquired machines lock for "offline-docker-464000" in 21.625µs
	I0925 04:20:55.130935    4685 start.go:93] Provisioning new machine with config: &{Name:offline-docker-464000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:offline-docker-464000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 04:20:55.130971    4685 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 04:20:55.138539    4685 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0925 04:20:55.152662    4685 start.go:159] libmachine.API.Create for "offline-docker-464000" (driver="qemu2")
	I0925 04:20:55.152689    4685 client.go:168] LocalClient.Create starting
	I0925 04:20:55.152754    4685 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem
	I0925 04:20:55.152783    4685 main.go:141] libmachine: Decoding PEM data...
	I0925 04:20:55.152792    4685 main.go:141] libmachine: Parsing certificate...
	I0925 04:20:55.152835    4685 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/cert.pem
	I0925 04:20:55.152853    4685 main.go:141] libmachine: Decoding PEM data...
	I0925 04:20:55.152860    4685 main.go:141] libmachine: Parsing certificate...
	I0925 04:20:55.153191    4685 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17297-1010/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0925 04:20:55.272770    4685 main.go:141] libmachine: Creating SSH key...
	I0925 04:20:55.327604    4685 main.go:141] libmachine: Creating Disk image...
	I0925 04:20:55.327614    4685 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 04:20:55.327772    4685 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/offline-docker-464000/disk.qcow2.raw /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/offline-docker-464000/disk.qcow2
	I0925 04:20:55.337050    4685 main.go:141] libmachine: STDOUT: 
	I0925 04:20:55.337072    4685 main.go:141] libmachine: STDERR: 
	I0925 04:20:55.337133    4685 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/offline-docker-464000/disk.qcow2 +20000M
	I0925 04:20:55.350714    4685 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 04:20:55.350729    4685 main.go:141] libmachine: STDERR: 
	I0925 04:20:55.350757    4685 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/offline-docker-464000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/offline-docker-464000/disk.qcow2
	I0925 04:20:55.350765    4685 main.go:141] libmachine: Starting QEMU VM...
	I0925 04:20:55.350796    4685 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/offline-docker-464000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/offline-docker-464000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/offline-docker-464000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:b7:8a:ae:5e:5b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/offline-docker-464000/disk.qcow2
	I0925 04:20:55.352390    4685 main.go:141] libmachine: STDOUT: 
	I0925 04:20:55.352404    4685 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 04:20:55.352424    4685 client.go:171] LocalClient.Create took 199.729208ms
	I0925 04:20:57.352537    4685 start.go:128] duration metric: createHost completed in 2.221555292s
	I0925 04:20:57.352571    4685 start.go:83] releasing machines lock for "offline-docker-464000", held for 2.221624375s
	W0925 04:20:57.352595    4685 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:20:57.363951    4685 out.go:177] * Deleting "offline-docker-464000" in qemu2 ...
	W0925 04:20:57.376040    4685 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:20:57.376052    4685 start.go:703] Will try again in 5 seconds ...
	I0925 04:21:02.378214    4685 start.go:365] acquiring machines lock for offline-docker-464000: {Name:mk918d99818e2cf21e5912bc291ed18d4b442ba6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 04:21:02.378626    4685 start.go:369] acquired machines lock for "offline-docker-464000" in 339.875µs
	I0925 04:21:02.378743    4685 start.go:93] Provisioning new machine with config: &{Name:offline-docker-464000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:offline-docker-464000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 04:21:02.378992    4685 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 04:21:02.387693    4685 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0925 04:21:02.434182    4685 start.go:159] libmachine.API.Create for "offline-docker-464000" (driver="qemu2")
	I0925 04:21:02.434230    4685 client.go:168] LocalClient.Create starting
	I0925 04:21:02.434362    4685 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem
	I0925 04:21:02.434420    4685 main.go:141] libmachine: Decoding PEM data...
	I0925 04:21:02.434442    4685 main.go:141] libmachine: Parsing certificate...
	I0925 04:21:02.434505    4685 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/cert.pem
	I0925 04:21:02.434541    4685 main.go:141] libmachine: Decoding PEM data...
	I0925 04:21:02.434553    4685 main.go:141] libmachine: Parsing certificate...
	I0925 04:21:02.434952    4685 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17297-1010/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0925 04:21:02.562391    4685 main.go:141] libmachine: Creating SSH key...
	I0925 04:21:02.697179    4685 main.go:141] libmachine: Creating Disk image...
	I0925 04:21:02.697192    4685 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 04:21:02.697407    4685 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/offline-docker-464000/disk.qcow2.raw /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/offline-docker-464000/disk.qcow2
	I0925 04:21:02.706135    4685 main.go:141] libmachine: STDOUT: 
	I0925 04:21:02.706147    4685 main.go:141] libmachine: STDERR: 
	I0925 04:21:02.706197    4685 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/offline-docker-464000/disk.qcow2 +20000M
	I0925 04:21:02.713310    4685 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 04:21:02.713320    4685 main.go:141] libmachine: STDERR: 
	I0925 04:21:02.713334    4685 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/offline-docker-464000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/offline-docker-464000/disk.qcow2
	I0925 04:21:02.713340    4685 main.go:141] libmachine: Starting QEMU VM...
	I0925 04:21:02.713381    4685 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/offline-docker-464000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/offline-docker-464000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/offline-docker-464000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:44:55:4b:da:ed -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/offline-docker-464000/disk.qcow2
	I0925 04:21:02.714927    4685 main.go:141] libmachine: STDOUT: 
	I0925 04:21:02.714938    4685 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 04:21:02.714950    4685 client.go:171] LocalClient.Create took 280.712125ms
	I0925 04:21:04.717024    4685 start.go:128] duration metric: createHost completed in 2.338015s
	I0925 04:21:04.717054    4685 start.go:83] releasing machines lock for "offline-docker-464000", held for 2.338406583s
	W0925 04:21:04.717167    4685 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-464000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-464000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:21:04.724424    4685 out.go:177] 
	W0925 04:21:04.728308    4685 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 04:21:04.728314    4685 out.go:239] * 
	* 
	W0925 04:21:04.728805    4685 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0925 04:21:04.739346    4685 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-464000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:523: *** TestOffline FAILED at 2023-09-25 04:21:04.748877 -0700 PDT m=+2864.624654210
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-464000 -n offline-docker-464000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-464000 -n offline-docker-464000: exit status 7 (29.519167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-464000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-464000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-464000
--- FAIL: TestOffline (9.86s)
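
Almost every qemu2 start in this run fails identically: socket_vmnet_client cannot reach the daemon at /var/run/socket_vmnet ("Connection refused"), which points at the socket_vmnet service being down on the agent rather than at the individual tests. A small diagnostic sketch, assuming Go is available on the agent and using the socket path taken from the logs above, that separates a down daemon from other QEMU start failures:

package main

import (
	"fmt"
	"net"
	"time"
)

// Dial the unix socket that socket_vmnet_client needs. "Connection refused"
// here matches the error in the test output and means the socket_vmnet
// daemon is not listening; any other outcome points elsewhere.
func main() {
	const sock = "/var/run/socket_vmnet" // path from the failing qemu command line
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections at", sock)
}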

TestAddons/parallel/Registry (720.89s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:304: failed waiting for registry replicacontroller to stabilize: timed out waiting for the condition
addons_test.go:306: registry stabilized in 6m0.001624125s
addons_test.go:308: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
addons_test.go:308: ***** TestAddons/parallel/Registry: pod "actual-registry=true" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-183000 -n addons-183000
addons_test.go:308: TestAddons/parallel/Registry: showing logs for failed pods as of 2023-09-25 03:52:26.855049 -0700 PDT m=+1146.819310876
addons_test.go:309: failed waiting for pod actual-registry: actual-registry=true within 6m0s: context deadline exceeded
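
The test polls kube-system for pods labelled actual-registry=true and gives up after six minutes. For manual triage, a hedged sketch, assuming kubectl is on PATH and the kubeconfig points at the addons-183000 cluster, that runs the same label query the test waits on:

package main

import (
	"fmt"
	"os/exec"
)

// List the pods the test above waits on: label actual-registry=true in the
// kube-system namespace. An empty list after the addon is enabled means the
// registry pods never started, matching the timeout in the log.
func main() {
	out, err := exec.Command("kubectl", "get", "pods",
		"-n", "kube-system", "-l", "actual-registry=true", "-o", "wide").CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("kubectl failed:", err)
	}
}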
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-183000 -n addons-183000
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-183000 logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-427000 | jenkins | v1.31.2 | 25 Sep 23 03:33 PDT |                     |
	|         | -p download-only-427000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-427000 | jenkins | v1.31.2 | 25 Sep 23 03:33 PDT |                     |
	|         | -p download-only-427000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.31.2 | 25 Sep 23 03:33 PDT | 25 Sep 23 03:33 PDT |
	| delete  | -p download-only-427000        | download-only-427000 | jenkins | v1.31.2 | 25 Sep 23 03:33 PDT | 25 Sep 23 03:33 PDT |
	| delete  | -p download-only-427000        | download-only-427000 | jenkins | v1.31.2 | 25 Sep 23 03:33 PDT | 25 Sep 23 03:33 PDT |
	| start   | --download-only -p             | binary-mirror-317000 | jenkins | v1.31.2 | 25 Sep 23 03:33 PDT |                     |
	|         | binary-mirror-317000           |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --binary-mirror                |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49310         |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-317000        | binary-mirror-317000 | jenkins | v1.31.2 | 25 Sep 23 03:33 PDT | 25 Sep 23 03:33 PDT |
	| start   | -p addons-183000               | addons-183000        | jenkins | v1.31.2 | 25 Sep 23 03:33 PDT | 25 Sep 23 03:40 PDT |
	|         | --wait=true --memory=4000      |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --addons=registry              |                      |         |         |                     |                     |
	|         | --addons=metrics-server        |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                      |         |         |                     |                     |
	|         | --addons=gcp-auth              |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|         | --addons=ingress               |                      |         |         |                     |                     |
	|         | --addons=ingress-dns           |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-183000        | jenkins | v1.31.2 | 25 Sep 23 03:52 PDT |                     |
	|         | addons-183000                  |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/25 03:33:43
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.21.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0925 03:33:43.113263    1555 out.go:296] Setting OutFile to fd 1 ...
	I0925 03:33:43.113390    1555 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 03:33:43.113393    1555 out.go:309] Setting ErrFile to fd 2...
	I0925 03:33:43.113395    1555 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 03:33:43.113522    1555 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17297-1010/.minikube/bin
	I0925 03:33:43.114539    1555 out.go:303] Setting JSON to false
	I0925 03:33:43.129689    1555 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":198,"bootTime":1695637825,"procs":391,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.6","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 03:33:43.129759    1555 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0925 03:33:43.134529    1555 out.go:177] * [addons-183000] minikube v1.31.2 on Darwin 13.6 (arm64)
	I0925 03:33:43.141636    1555 out.go:177]   - MINIKUBE_LOCATION=17297
	I0925 03:33:43.145595    1555 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	I0925 03:33:43.141675    1555 notify.go:220] Checking for updates...
	I0925 03:33:43.149882    1555 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 03:33:43.152528    1555 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 03:33:43.155561    1555 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	I0925 03:33:43.158461    1555 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 03:33:43.161685    1555 driver.go:373] Setting default libvirt URI to qemu:///system
	I0925 03:33:43.165518    1555 out.go:177] * Using the qemu2 driver based on user configuration
	I0925 03:33:43.170494    1555 start.go:298] selected driver: qemu2
	I0925 03:33:43.170500    1555 start.go:902] validating driver "qemu2" against <nil>
	I0925 03:33:43.170505    1555 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 03:33:43.172415    1555 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0925 03:33:43.175485    1555 out.go:177] * Automatically selected the socket_vmnet network
	I0925 03:33:43.178631    1555 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0925 03:33:43.178656    1555 cni.go:84] Creating CNI manager for ""
	I0925 03:33:43.178667    1555 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 03:33:43.178671    1555 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0925 03:33:43.178683    1555 start_flags.go:321] config:
	{Name:addons-183000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-183000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
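The block above is the full cluster config struct dumped with Go's %+v verb; the profile.go step a few lines below persists it as JSON. A minimal, hypothetical sketch of that persistence step (the struct carries only a small subset of the fields shown, and the path is a placeholder, not minikube's real layout):

// Sketch only: persist a cluster config struct to profiles/<name>/config.json.
// Field names are a subset of those in the log; nothing here is minikube's code.
package main

import (
	"encoding/json"
	"log"
	"os"
	"path/filepath"
)

type ClusterConfig struct {
	Name     string
	Driver   string
	Memory   int // MB
	CPUs     int
	DiskSize int // MB
}

func main() {
	cfg := ClusterConfig{Name: "addons-183000", Driver: "qemu2", Memory: 4000, CPUs: 2, DiskSize: 20000}
	dir := filepath.Join(os.TempDir(), "profiles", cfg.Name) // placeholder location
	if err := os.MkdirAll(dir, 0o755); err != nil {
		log.Fatal(err)
	}
	data, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		log.Fatal(err)
	}
	path := filepath.Join(dir, "config.json")
	if err := os.WriteFile(path, data, 0o644); err != nil {
		log.Fatal(err)
	}
	log.Printf("saved config to %s", path)
}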
	I0925 03:33:43.182821    1555 iso.go:125] acquiring lock: {Name:mkf881a60cf9fd1672567914305ff6f7a4f13809 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 03:33:43.186491    1555 out.go:177] * Starting control plane node addons-183000 in cluster addons-183000
	I0925 03:33:43.194499    1555 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0925 03:33:43.194520    1555 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0925 03:33:43.194535    1555 cache.go:57] Caching tarball of preloaded images
	I0925 03:33:43.194599    1555 preload.go:174] Found /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0925 03:33:43.194605    1555 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0925 03:33:43.194819    1555 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/config.json ...
	I0925 03:33:43.194831    1555 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/config.json: {Name:mk49657fba0a0e3293097f9bbbd8574691cb2471 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 03:33:43.195036    1555 start.go:365] acquiring machines lock for addons-183000: {Name:mk918d99818e2cf21e5912bc291ed18d4b442ba6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 03:33:43.195158    1555 start.go:369] acquired machines lock for "addons-183000" in 116.458µs
	I0925 03:33:43.195167    1555 start.go:93] Provisioning new machine with config: &{Name:addons-183000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-183000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 03:33:43.195202    1555 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 03:33:43.203570    1555 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0925 03:33:43.526310    1555 start.go:159] libmachine.API.Create for "addons-183000" (driver="qemu2")
	I0925 03:33:43.526360    1555 client.go:168] LocalClient.Create starting
	I0925 03:33:43.526524    1555 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem
	I0925 03:33:43.685162    1555 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/cert.pem
	I0925 03:33:43.725069    1555 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17297-1010/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0925 03:33:44.270899    1555 main.go:141] libmachine: Creating SSH key...
	I0925 03:33:44.356373    1555 main.go:141] libmachine: Creating Disk image...
	I0925 03:33:44.356381    1555 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 03:33:44.356565    1555 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/disk.qcow2.raw /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/disk.qcow2
	I0925 03:33:44.389562    1555 main.go:141] libmachine: STDOUT: 
	I0925 03:33:44.389584    1555 main.go:141] libmachine: STDERR: 
	I0925 03:33:44.389658    1555 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/disk.qcow2 +20000M
	I0925 03:33:44.397120    1555 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 03:33:44.397139    1555 main.go:141] libmachine: STDERR: 
	I0925 03:33:44.397152    1555 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/disk.qcow2
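The two qemu-img invocations above convert the raw disk to qcow2, then grow it to the configured size. A sketch of driving them from Go, printing STDOUT/STDERR the way the log does (paths are placeholders; this is not minikube's actual code):

// Sketch only: run qemu-img convert + resize via os/exec, capturing
// stdout/stderr separately as the libmachine log lines above do.
package main

import (
	"bytes"
	"fmt"
	"log"
	"os/exec"
)

func runQemuImg(args ...string) error {
	cmd := exec.Command("qemu-img", args...)
	var stdout, stderr bytes.Buffer
	cmd.Stdout, cmd.Stderr = &stdout, &stderr
	err := cmd.Run()
	fmt.Printf("STDOUT: %s\nSTDERR: %s\n", stdout.String(), stderr.String())
	return err
}

func main() {
	raw, img := "disk.qcow2.raw", "disk.qcow2" // placeholder paths
	if err := runQemuImg("convert", "-f", "raw", "-O", "qcow2", raw, img); err != nil {
		log.Fatal(err)
	}
	if err := runQemuImg("resize", img, "+20000M"); err != nil {
		log.Fatal(err)
	}
}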
	I0925 03:33:44.397157    1555 main.go:141] libmachine: Starting QEMU VM...
	I0925 03:33:44.397194    1555 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:70:b3:50:3d:bc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/disk.qcow2
	I0925 03:33:44.464471    1555 main.go:141] libmachine: STDOUT: 
	I0925 03:33:44.464499    1555 main.go:141] libmachine: STDERR: 
	I0925 03:33:44.464503    1555 main.go:141] libmachine: Attempt 0
	I0925 03:33:44.464522    1555 main.go:141] libmachine: Searching for 4e:70:b3:50:3d:bc in /var/db/dhcpd_leases ...
	I0925 03:33:46.465678    1555 main.go:141] libmachine: Attempt 1
	I0925 03:33:46.465761    1555 main.go:141] libmachine: Searching for 4e:70:b3:50:3d:bc in /var/db/dhcpd_leases ...
	I0925 03:33:48.467021    1555 main.go:141] libmachine: Attempt 2
	I0925 03:33:48.467061    1555 main.go:141] libmachine: Searching for 4e:70:b3:50:3d:bc in /var/db/dhcpd_leases ...
	I0925 03:33:50.468194    1555 main.go:141] libmachine: Attempt 3
	I0925 03:33:50.468212    1555 main.go:141] libmachine: Searching for 4e:70:b3:50:3d:bc in /var/db/dhcpd_leases ...
	I0925 03:33:52.469241    1555 main.go:141] libmachine: Attempt 4
	I0925 03:33:52.469258    1555 main.go:141] libmachine: Searching for 4e:70:b3:50:3d:bc in /var/db/dhcpd_leases ...
	I0925 03:33:54.470316    1555 main.go:141] libmachine: Attempt 5
	I0925 03:33:54.470352    1555 main.go:141] libmachine: Searching for 4e:70:b3:50:3d:bc in /var/db/dhcpd_leases ...
	I0925 03:33:56.471428    1555 main.go:141] libmachine: Attempt 6
	I0925 03:33:56.471461    1555 main.go:141] libmachine: Searching for 4e:70:b3:50:3d:bc in /var/db/dhcpd_leases ...
	I0925 03:33:56.471625    1555 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0925 03:33:56.471679    1555 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:4e:70:b3:50:3d:bc ID:1,4e:70:b3:50:3d:bc Lease:0x6512b393}
	I0925 03:33:56.471685    1555 main.go:141] libmachine: Found match: 4e:70:b3:50:3d:bc
	I0925 03:33:56.471705    1555 main.go:141] libmachine: IP: 192.168.105.2
	I0925 03:33:56.471714    1555 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
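The attempt loop above polls the host's DHCP lease database every two seconds until the VM's MAC address shows up, then reads its IP. A rough Go equivalent, with the lease-file layout inferred from the dhcp entry printed in the log (not a guaranteed parser for /var/db/dhcpd_leases):

// Sketch only: poll /var/db/dhcpd_leases for a MAC and pull the ip_address
// field from its lease block. Format assumptions come from the log line above.
package main

import (
	"fmt"
	"log"
	"os"
	"regexp"
	"strings"
	"time"
)

func findIP(leases, mac string) (string, bool) {
	// assumed block shape: { name=minikube ip_address=192.168.105.2 hw_address=1,4e:70:b3:50:3d:bc ... }
	re := regexp.MustCompile(`ip_address=(\S+)`)
	for _, block := range strings.Split(leases, "{") {
		if strings.Contains(block, mac) {
			if m := re.FindStringSubmatch(block); m != nil {
				return m[1], true
			}
		}
	}
	return "", false
}

func main() {
	mac := "4e:70:b3:50:3d:bc"
	for attempt := 0; attempt < 30; attempt++ {
		data, err := os.ReadFile("/var/db/dhcpd_leases")
		if err == nil {
			if ip, ok := findIP(string(data), mac); ok {
				fmt.Println("IP:", ip)
				return
			}
		}
		time.Sleep(2 * time.Second) // matches the ~2s spacing of the attempts above
	}
	log.Fatal("no DHCP lease found for " + mac)
}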
	I0925 03:33:57.476002    1555 machine.go:88] provisioning docker machine ...
	I0925 03:33:57.476029    1555 buildroot.go:166] provisioning hostname "addons-183000"
	I0925 03:33:57.476399    1555 main.go:141] libmachine: Using SSH client type: native
	I0925 03:33:57.476656    1555 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100de8760] 0x100deaed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0925 03:33:57.476663    1555 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-183000 && echo "addons-183000" | sudo tee /etc/hostname
	I0925 03:33:57.549226    1555 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-183000
	
	I0925 03:33:57.549294    1555 main.go:141] libmachine: Using SSH client type: native
	I0925 03:33:57.549565    1555 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100de8760] 0x100deaed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0925 03:33:57.549580    1555 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-183000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-183000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-183000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0925 03:33:57.619664    1555 main.go:141] libmachine: SSH cmd err, output: <nil>: 
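Each "About to run SSH command" step above executes over a plain SSH session to the VM. A self-contained sketch using golang.org/x/crypto/ssh, with the host, user, key path, and command taken from the log; this is an illustration, not minikube's actual SSH client code:

// Sketch only: run one provisioning command over SSH with x/crypto/ssh.
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// placeholder key path; the log uses .minikube/machines/addons-183000/id_rsa
	key, err := os.ReadFile(os.Getenv("HOME") + "/.minikube/machines/addons-183000/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", "192.168.105.2:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(`sudo hostname addons-183000 && echo "addons-183000" | sudo tee /etc/hostname`)
	fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
}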
	I0925 03:33:57.619678    1555 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17297-1010/.minikube CaCertPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17297-1010/.minikube}
	I0925 03:33:57.619692    1555 buildroot.go:174] setting up certificates
	I0925 03:33:57.619698    1555 provision.go:83] configureAuth start
	I0925 03:33:57.619702    1555 provision.go:138] copyHostCerts
	I0925 03:33:57.619800    1555 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17297-1010/.minikube/key.pem (1679 bytes)
	I0925 03:33:57.620015    1555 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17297-1010/.minikube/ca.pem (1082 bytes)
	I0925 03:33:57.620106    1555 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17297-1010/.minikube/cert.pem (1123 bytes)
	I0925 03:33:57.620180    1555 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca-key.pem org=jenkins.addons-183000 san=[192.168.105.2 192.168.105.2 localhost 127.0.0.1 minikube addons-183000]
	I0925 03:33:57.680529    1555 provision.go:172] copyRemoteCerts
	I0925 03:33:57.680584    1555 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0925 03:33:57.680600    1555 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/id_rsa Username:docker}
	I0925 03:33:57.716693    1555 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0925 03:33:57.724070    1555 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0925 03:33:57.731348    1555 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0925 03:33:57.738044    1555 provision.go:86] duration metric: configureAuth took 118.340875ms
	I0925 03:33:57.738067    1555 buildroot.go:189] setting minikube options for container-runtime
	I0925 03:33:57.738181    1555 config.go:182] Loaded profile config "addons-183000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 03:33:57.738225    1555 main.go:141] libmachine: Using SSH client type: native
	I0925 03:33:57.738442    1555 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100de8760] 0x100deaed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0925 03:33:57.738446    1555 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0925 03:33:57.806528    1555 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0925 03:33:57.806536    1555 buildroot.go:70] root file system type: tmpfs
	I0925 03:33:57.806591    1555 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0925 03:33:57.806639    1555 main.go:141] libmachine: Using SSH client type: native
	I0925 03:33:57.806901    1555 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100de8760] 0x100deaed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0925 03:33:57.806939    1555 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0925 03:33:57.879305    1555 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0925 03:33:57.879349    1555 main.go:141] libmachine: Using SSH client type: native
	I0925 03:33:57.879600    1555 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100de8760] 0x100deaed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0925 03:33:57.879612    1555 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0925 03:33:58.218156    1555 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0925 03:33:58.218176    1555 machine.go:91] provisioned docker machine in 742.178459ms
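The diff-then-mv command above is an update-if-changed idiom: the new unit is written to docker.service.new, and only when it differs from the installed unit is it swapped in, followed by daemon-reload, enable, and restart. A Go rendition of the same idea, assuming root on a systemd host (a sketch, not minikube's implementation):

// Sketch only: replace a systemd unit and restart the service only if the
// newly generated file actually differs from what is installed.
package main

import (
	"bytes"
	"log"
	"os"
	"os/exec"
)

func main() {
	const unit = "/lib/systemd/system/docker.service"
	oldData, _ := os.ReadFile(unit) // a missing file reads as nil, i.e. "different"
	newData, err := os.ReadFile(unit + ".new")
	if err != nil {
		log.Fatal(err)
	}
	if bytes.Equal(oldData, newData) {
		log.Println("docker.service unchanged, skipping restart")
		return
	}
	if err := os.Rename(unit+".new", unit); err != nil {
		log.Fatal(err)
	}
	for _, args := range [][]string{{"daemon-reload"}, {"enable", "docker"}, {"restart", "docker"}} {
		if out, err := exec.Command("systemctl", append([]string{"-f"}, args...)...).CombinedOutput(); err != nil {
			log.Fatalf("systemctl %v: %v: %s", args, err, out)
		}
	}
}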
	I0925 03:33:58.218184    1555 client.go:171] LocalClient.Create took 14.692090292s
	I0925 03:33:58.218196    1555 start.go:167] duration metric: libmachine.API.Create for "addons-183000" took 14.692162542s
	I0925 03:33:58.218201    1555 start.go:300] post-start starting for "addons-183000" (driver="qemu2")
	I0925 03:33:58.218213    1555 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0925 03:33:58.218288    1555 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0925 03:33:58.218298    1555 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/id_rsa Username:docker}
	I0925 03:33:58.255037    1555 ssh_runner.go:195] Run: cat /etc/os-release
	I0925 03:33:58.256454    1555 info.go:137] Remote host: Buildroot 2021.02.12
	I0925 03:33:58.256461    1555 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17297-1010/.minikube/addons for local assets ...
	I0925 03:33:58.256533    1555 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17297-1010/.minikube/files for local assets ...
	I0925 03:33:58.256562    1555 start.go:303] post-start completed in 38.354459ms
	I0925 03:33:58.256920    1555 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/config.json ...
	I0925 03:33:58.257077    1555 start.go:128] duration metric: createHost completed in 15.062148875s
	I0925 03:33:58.257104    1555 main.go:141] libmachine: Using SSH client type: native
	I0925 03:33:58.257337    1555 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100de8760] 0x100deaed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0925 03:33:58.257341    1555 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0925 03:33:58.325173    1555 main.go:141] libmachine: SSH cmd err, output: <nil>: 1695638038.462407626
	
	I0925 03:33:58.325184    1555 fix.go:206] guest clock: 1695638038.462407626
	I0925 03:33:58.325188    1555 fix.go:219] Guest: 2023-09-25 03:33:58.462407626 -0700 PDT Remote: 2023-09-25 03:33:58.257082 -0700 PDT m=+15.162425626 (delta=205.325626ms)
	I0925 03:33:58.325199    1555 fix.go:190] guest clock delta is within tolerance: 205.325626ms
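The guest-clock check above reads `date +%s.%N` on the VM and compares it with the host clock. A small sketch of that parse-and-compare using the timestamp from the log; the 2s tolerance below is an assumption for illustration, not minikube's exact threshold:

// Sketch only: parse `date +%s.%N` output and flag drift beyond a tolerance.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func parseGuestClock(s string) (time.Time, error) {
	sec, frac, _ := strings.Cut(strings.TrimSpace(s), ".")
	secs, err := strconv.ParseInt(sec, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	nsecs, err := strconv.ParseInt(frac, 10, 64) // 9 fractional digits = nanoseconds
	if err != nil {
		return time.Time{}, err
	}
	return time.Unix(secs, nsecs), nil
}

func main() {
	guest, err := parseGuestClock("1695638038.462407626") // value from the log
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed threshold
	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta <= tolerance)
}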
	I0925 03:33:58.325201    1555 start.go:83] releasing machines lock for "addons-183000", held for 15.130317917s
	I0925 03:33:58.325486    1555 ssh_runner.go:195] Run: cat /version.json
	I0925 03:33:58.325494    1555 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/id_rsa Username:docker}
	I0925 03:33:58.325516    1555 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0925 03:33:58.325555    1555 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/id_rsa Username:docker}
	I0925 03:33:58.361340    1555 ssh_runner.go:195] Run: systemctl --version
	I0925 03:33:58.402839    1555 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0925 03:33:58.404630    1555 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0925 03:33:58.404664    1555 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0925 03:33:58.409389    1555 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0925 03:33:58.409398    1555 start.go:469] detecting cgroup driver to use...
	I0925 03:33:58.409504    1555 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0925 03:33:58.414731    1555 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0925 03:33:58.417759    1555 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0925 03:33:58.420882    1555 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0925 03:33:58.420905    1555 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0925 03:33:58.424376    1555 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0925 03:33:58.427971    1555 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0925 03:33:58.431438    1555 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0925 03:33:58.434555    1555 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0925 03:33:58.437481    1555 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0925 03:33:58.440650    1555 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0925 03:33:58.444117    1555 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0925 03:33:58.446963    1555 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 03:33:58.506828    1555 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0925 03:33:58.515326    1555 start.go:469] detecting cgroup driver to use...
	I0925 03:33:58.515396    1555 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0925 03:33:58.520350    1555 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0925 03:33:58.525290    1555 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0925 03:33:58.532641    1555 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0925 03:33:58.537661    1555 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0925 03:33:58.542291    1555 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0925 03:33:58.583433    1555 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0925 03:33:58.588627    1555 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0925 03:33:58.594011    1555 ssh_runner.go:195] Run: which cri-dockerd
	I0925 03:33:58.595317    1555 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0925 03:33:58.597772    1555 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0925 03:33:58.602614    1555 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0925 03:33:58.687592    1555 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0925 03:33:58.763371    1555 docker.go:554] configuring docker to use "cgroupfs" as cgroup driver...
	I0925 03:33:58.763431    1555 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
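The "scp memory" line above ships a generated /etc/docker/daemon.json that pins Docker's cgroup driver. The log doesn't show the file's contents; the sketch below builds a plausible minimal version using Docker's standard exec-opts key, which is an assumption, not the exact file minikube writes:

// Sketch only: write a minimal daemon.json pinning the cgroupfs driver.
// The exec-opts key is standard dockerd config; the rest is assumed.
package main

import (
	"encoding/json"
	"log"
	"os"
)

func main() {
	cfg := map[string]any{
		"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
	}
	data, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		log.Fatal(err)
	}
	// Writing to a local path here; the real target is /etc/docker/daemon.json.
	if err := os.WriteFile("daemon.json", data, 0o644); err != nil {
		log.Fatal(err)
	}
}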
	I0925 03:33:58.768807    1555 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 03:33:58.850856    1555 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0925 03:34:00.021109    1555 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.170257708s)
	I0925 03:34:00.021184    1555 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0925 03:34:00.102397    1555 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0925 03:34:00.182389    1555 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0925 03:34:00.242288    1555 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 03:34:00.310048    1555 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0925 03:34:00.320927    1555 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 03:34:00.397773    1555 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0925 03:34:00.421934    1555 start.go:516] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0925 03:34:00.422022    1555 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0925 03:34:00.424107    1555 start.go:537] Will wait 60s for crictl version
	I0925 03:34:00.424134    1555 ssh_runner.go:195] Run: which crictl
	I0925 03:34:00.425400    1555 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0925 03:34:00.448268    1555 start.go:553] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I0925 03:34:00.448328    1555 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0925 03:34:00.458640    1555 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0925 03:34:00.474285    1555 out.go:204] * Preparing Kubernetes v1.28.2 on Docker 24.0.6 ...
	I0925 03:34:00.474362    1555 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0925 03:34:00.475766    1555 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0925 03:34:00.479918    1555 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0925 03:34:00.479959    1555 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0925 03:34:00.485137    1555 docker.go:664] Got preloaded images: 
	I0925 03:34:00.485144    1555 docker.go:670] registry.k8s.io/kube-apiserver:v1.28.2 wasn't preloaded
	I0925 03:34:00.485184    1555 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0925 03:34:00.488328    1555 ssh_runner.go:195] Run: which lz4
	I0925 03:34:00.489753    1555 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0925 03:34:00.490946    1555 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0925 03:34:00.490958    1555 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (356993689 bytes)
	I0925 03:34:01.821604    1555 docker.go:628] Took 1.331913 seconds to copy over tarball
	I0925 03:34:01.821663    1555 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0925 03:34:02.850635    1555 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.028977875s)
	I0925 03:34:02.850646    1555 ssh_runner.go:146] rm: /preloaded.tar.lz4
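The extraction step is timed the same way every ssh_runner command is: wall-clock duration around the remote command. A sketch of running the lz4-compressed tar and reporting the duration in the log's "Completed: ... (elapsed)" style (illustration, not minikube's runner):

// Sketch only: time an lz4 tar extraction and log the elapsed duration.
package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	cmd := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("extract failed: %v: %s", err, out)
	}
	log.Printf("Completed: %s: (%s)", cmd.String(), time.Since(start))
}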
	I0925 03:34:02.866214    1555 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0925 03:34:02.869216    1555 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0925 03:34:02.874196    1555 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 03:34:02.955148    1555 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0925 03:34:05.167252    1555 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.212127209s)
	I0925 03:34:05.167356    1555 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0925 03:34:05.173293    1555 docker.go:664] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.2
	registry.k8s.io/kube-scheduler:v1.28.2
	registry.k8s.io/kube-proxy:v1.28.2
	registry.k8s.io/kube-controller-manager:v1.28.2
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0925 03:34:05.173304    1555 cache_images.go:84] Images are preloaded, skipping loading
	I0925 03:34:05.173372    1555 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0925 03:34:05.180961    1555 cni.go:84] Creating CNI manager for ""
	I0925 03:34:05.180975    1555 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 03:34:05.180995    1555 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0925 03:34:05.181006    1555 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-183000 NodeName:addons-183000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0925 03:34:05.181071    1555 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-183000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
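	The generated kubeadm.yaml above is four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by ---. A sketch that decodes such a multi-document file and prints each apiVersion/kind, using gopkg.in/yaml.v3; this is a sanity-check illustration, not a step minikube performs:

// Sketch only: walk a multi-document YAML file and report each doc's
// apiVersion/kind via yaml.v3's streaming decoder.
package main

import (
	"errors"
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml") // placeholder path
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); errors.Is(err, io.EOF) {
			break
		} else if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
	}
}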
	
	I0925 03:34:05.181111    1555 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-183000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:addons-183000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0925 03:34:05.181162    1555 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I0925 03:34:05.184441    1555 binaries.go:44] Found k8s binaries, skipping transfer
	I0925 03:34:05.184477    1555 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0925 03:34:05.187654    1555 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0925 03:34:05.192980    1555 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0925 03:34:05.197983    1555 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I0925 03:34:05.202799    1555 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0925 03:34:05.204148    1555 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0925 03:34:05.208295    1555 certs.go:56] Setting up /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000 for IP: 192.168.105.2
	I0925 03:34:05.208303    1555 certs.go:190] acquiring lock for shared ca certs: {Name:mk095b03680bcdeba6c321a9f458c9fbafa67639 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 03:34:05.208463    1555 certs.go:204] generating minikubeCA CA: /Users/jenkins/minikube-integration/17297-1010/.minikube/ca.key
	I0925 03:34:05.279404    1555 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17297-1010/.minikube/ca.crt ...
	I0925 03:34:05.279413    1555 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/ca.crt: {Name:mk70f9fc8ba800117a8a8b4d751d3a98c619cb54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 03:34:05.279591    1555 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17297-1010/.minikube/ca.key ...
	I0925 03:34:05.279595    1555 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/ca.key: {Name:mkd44aa01a2f3e5b978643c9a3feb1028c2bb791 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 03:34:05.279712    1555 certs.go:204] generating proxyClientCA CA: /Users/jenkins/minikube-integration/17297-1010/.minikube/proxy-client-ca.key
	I0925 03:34:05.342350    1555 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17297-1010/.minikube/proxy-client-ca.crt ...
	I0925 03:34:05.342356    1555 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/proxy-client-ca.crt: {Name:mkc0af119bea050a868312bfe8f89d742604990c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 03:34:05.342558    1555 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17297-1010/.minikube/proxy-client-ca.key ...
	I0925 03:34:05.342563    1555 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/proxy-client-ca.key: {Name:mka9b8c6393173e2358c8b84eb9bff6ea6851f33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 03:34:05.342694    1555 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/client.key
	I0925 03:34:05.342700    1555 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/client.crt with IP's: []
	I0925 03:34:05.380999    1555 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/client.crt ...
	I0925 03:34:05.381013    1555 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/client.crt: {Name:mkec4b98dbbfb657baac4f5fae18fe43bd8b5970 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 03:34:05.381125    1555 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/client.key ...
	I0925 03:34:05.381130    1555 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/client.key: {Name:mk8be81ea1673fa1894559e8faa2fa2323674614 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 03:34:05.381227    1555 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/apiserver.key.96055969
	I0925 03:34:05.381235    1555 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/apiserver.crt.96055969 with IP's: [192.168.105.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0925 03:34:05.441721    1555 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/apiserver.crt.96055969 ...
	I0925 03:34:05.441725    1555 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/apiserver.crt.96055969: {Name:mkba38dc1a56241112b86d1503bca4f2588c1bf7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 03:34:05.441849    1555 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/apiserver.key.96055969 ...
	I0925 03:34:05.441852    1555 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/apiserver.key.96055969: {Name:mk41423e9550dcb3371da4467db52078d1bb4d78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 03:34:05.441956    1555 certs.go:337] copying /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/apiserver.crt.96055969 -> /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/apiserver.crt
	I0925 03:34:05.442053    1555 certs.go:341] copying /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/apiserver.key.96055969 -> /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/apiserver.key
	I0925 03:34:05.442146    1555 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/proxy-client.key
	I0925 03:34:05.442154    1555 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/proxy-client.crt with IP's: []
	I0925 03:34:05.578079    1555 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/proxy-client.crt ...
	I0925 03:34:05.578082    1555 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/proxy-client.crt: {Name:mkbd132fd7a0f2cb28d572f95bd43c9a1ef215f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 03:34:05.578216    1555 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/proxy-client.key ...
	I0925 03:34:05.578218    1555 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/proxy-client.key: {Name:mkf93f480df65e887c0e782806fe1d821d05370d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 03:34:05.578436    1555 certs.go:437] found cert: /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca-key.pem (1675 bytes)
	I0925 03:34:05.578458    1555 certs.go:437] found cert: /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem (1082 bytes)
	I0925 03:34:05.578479    1555 certs.go:437] found cert: /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/cert.pem (1123 bytes)
	I0925 03:34:05.578499    1555 certs.go:437] found cert: /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/key.pem (1679 bytes)
	I0925 03:34:05.578876    1555 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0925 03:34:05.587435    1555 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0925 03:34:05.594545    1555 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0925 03:34:05.601433    1555 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0925 03:34:05.608504    1555 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0925 03:34:05.616247    1555 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0925 03:34:05.623555    1555 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0925 03:34:05.630877    1555 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0925 03:34:05.637827    1555 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0925 03:34:05.644421    1555 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0925 03:34:05.650432    1555 ssh_runner.go:195] Run: openssl version
	I0925 03:34:05.652383    1555 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0925 03:34:05.655860    1555 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0925 03:34:05.657450    1555 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 25 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I0925 03:34:05.657472    1555 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0925 03:34:05.659354    1555 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
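The b5213941.0 symlink above exists because OpenSSL resolves CAs in /etc/ssl/certs by a hash of the certificate's subject name; `openssl x509 -hash` prints that hash. A sketch of the same two steps from Go (requires root to write into /etc/ssl/certs; illustration only):

// Sketch only: compute the OpenSSL subject hash for a CA PEM and create the
// <hash>.0 symlink that the cert lookup path expects.
package main

import (
	"log"
	"os/exec"
	"strings"
)

func main() {
	const pem = "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941, as in the log
	link := "/etc/ssl/certs/" + hash + ".0"
	// ln -fs equivalent, mirroring the shell command in the log.
	if err := exec.Command("ln", "-fs", "/etc/ssl/certs/minikubeCA.pem", link).Run(); err != nil {
		log.Fatal(err)
	}
	log.Printf("linked %s -> minikubeCA.pem", link)
}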
	I0925 03:34:05.662355    1555 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0925 03:34:05.663775    1555 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0925 03:34:05.663811    1555 kubeadm.go:404] StartCluster: {Name:addons-183000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-183000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 03:34:05.663875    1555 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0925 03:34:05.669363    1555 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0925 03:34:05.672641    1555 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0925 03:34:05.675788    1555 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0925 03:34:05.678955    1555 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0925 03:34:05.678977    1555 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0925 03:34:05.700129    1555 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I0925 03:34:05.700165    1555 kubeadm.go:322] [preflight] Running pre-flight checks
	I0925 03:34:05.762507    1555 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0925 03:34:05.762580    1555 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0925 03:34:05.762631    1555 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0925 03:34:05.856523    1555 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0925 03:34:05.862696    1555 out.go:204]   - Generating certificates and keys ...
	I0925 03:34:05.862744    1555 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0925 03:34:05.862781    1555 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0925 03:34:05.954799    1555 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0925 03:34:06.088347    1555 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0925 03:34:06.179074    1555 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0925 03:34:06.367263    1555 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0925 03:34:06.441263    1555 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0925 03:34:06.441326    1555 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-183000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0925 03:34:06.679555    1555 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0925 03:34:06.679622    1555 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-183000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0925 03:34:06.780717    1555 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0925 03:34:06.934557    1555 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0925 03:34:07.004571    1555 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0925 03:34:07.004599    1555 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0925 03:34:07.096444    1555 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0925 03:34:07.197087    1555 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0925 03:34:07.295019    1555 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0925 03:34:07.459088    1555 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0925 03:34:07.459841    1555 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0925 03:34:07.461016    1555 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0925 03:34:07.464311    1555 out.go:204]   - Booting up control plane ...
	I0925 03:34:07.464429    1555 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0925 03:34:07.464523    1555 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0925 03:34:07.464562    1555 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0925 03:34:07.468573    1555 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0925 03:34:07.468914    1555 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0925 03:34:07.468980    1555 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0925 03:34:07.551081    1555 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0925 03:34:11.552205    1555 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.001307 seconds
	I0925 03:34:11.552277    1555 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0925 03:34:11.558090    1555 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0925 03:34:12.066492    1555 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0925 03:34:12.066604    1555 kubeadm.go:322] [mark-control-plane] Marking the node addons-183000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0925 03:34:12.571455    1555 kubeadm.go:322] [bootstrap-token] Using token: dcud0i.8u8422zl7jahtpxe
	I0925 03:34:12.577836    1555 out.go:204]   - Configuring RBAC rules ...
	I0925 03:34:12.577916    1555 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0925 03:34:12.580042    1555 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0925 03:34:12.583046    1555 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0925 03:34:12.584193    1555 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0925 03:34:12.585457    1555 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0925 03:34:12.586636    1555 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0925 03:34:12.592832    1555 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0925 03:34:12.757427    1555 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0925 03:34:12.982058    1555 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0925 03:34:12.982629    1555 kubeadm.go:322] 
	I0925 03:34:12.982664    1555 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0925 03:34:12.982667    1555 kubeadm.go:322] 
	I0925 03:34:12.982715    1555 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0925 03:34:12.982721    1555 kubeadm.go:322] 
	I0925 03:34:12.982735    1555 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0925 03:34:12.982762    1555 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0925 03:34:12.982824    1555 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0925 03:34:12.982828    1555 kubeadm.go:322] 
	I0925 03:34:12.982852    1555 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0925 03:34:12.982856    1555 kubeadm.go:322] 
	I0925 03:34:12.982895    1555 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0925 03:34:12.982898    1555 kubeadm.go:322] 
	I0925 03:34:12.982927    1555 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0925 03:34:12.982998    1555 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0925 03:34:12.983041    1555 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0925 03:34:12.983046    1555 kubeadm.go:322] 
	I0925 03:34:12.983087    1555 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0925 03:34:12.983123    1555 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0925 03:34:12.983125    1555 kubeadm.go:322] 
	I0925 03:34:12.983172    1555 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token dcud0i.8u8422zl7jahtpxe \
	I0925 03:34:12.983225    1555 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:3fc5fb926713648f8638ba10da0d4f45584d32929bcc07af5ada491c000ad47e \
	I0925 03:34:12.983240    1555 kubeadm.go:322] 	--control-plane 
	I0925 03:34:12.983242    1555 kubeadm.go:322] 
	I0925 03:34:12.983281    1555 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0925 03:34:12.983285    1555 kubeadm.go:322] 
	I0925 03:34:12.983328    1555 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token dcud0i.8u8422zl7jahtpxe \
	I0925 03:34:12.983387    1555 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:3fc5fb926713648f8638ba10da0d4f45584d32929bcc07af5ada491c000ad47e 
	I0925 03:34:12.983463    1555 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0925 03:34:12.983472    1555 cni.go:84] Creating CNI manager for ""
	I0925 03:34:12.983479    1555 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 03:34:12.992098    1555 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0925 03:34:12.995235    1555 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0925 03:34:12.999700    1555 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0925 03:34:13.004656    1555 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0925 03:34:13.004755    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=1bf6c3d5317028f348e55ea19d261973a6487d3c minikube.k8s.io/name=addons-183000 minikube.k8s.io/updated_at=2023_09_25T03_34_13_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:13.004757    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:13.008164    1555 ops.go:34] apiserver oom_adj: -16
	I0925 03:34:13.063625    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:13.095139    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:13.629666    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:14.129649    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:14.629662    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:15.129655    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:15.629628    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:16.129723    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:16.629660    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:17.129683    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:17.629643    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:18.129619    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:18.629638    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:19.129594    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:19.629589    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:20.129625    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:20.629540    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:21.129598    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:21.629573    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:22.129550    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:22.629493    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:23.129517    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:23.629511    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:24.129464    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:24.629448    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:25.129565    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:25.629529    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:26.129496    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:26.629436    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:26.667079    1555 kubeadm.go:1081] duration metric: took 13.662618083s to wait for elevateKubeSystemPrivileges.
	I0925 03:34:26.667097    1555 kubeadm.go:406] StartCluster complete in 21.003673917s
	I0925 03:34:26.667106    1555 settings.go:142] acquiring lock: {Name:mkb5a0822179f07ef9369c44aa9b64eb9ef74eed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 03:34:26.667266    1555 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17297-1010/kubeconfig
	I0925 03:34:26.667431    1555 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/kubeconfig: {Name:mkaa9d09ca2bf27c1a43efc9acf938adcc68343d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 03:34:26.667677    1555 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0925 03:34:26.667722    1555 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0925 03:34:26.667779    1555 addons.go:69] Setting volumesnapshots=true in profile "addons-183000"
	I0925 03:34:26.667782    1555 addons.go:69] Setting cloud-spanner=true in profile "addons-183000"
	I0925 03:34:26.667785    1555 addons.go:231] Setting addon volumesnapshots=true in "addons-183000"
	I0925 03:34:26.667789    1555 addons.go:231] Setting addon cloud-spanner=true in "addons-183000"
	I0925 03:34:26.667790    1555 addons.go:69] Setting default-storageclass=true in profile "addons-183000"
	I0925 03:34:26.667799    1555 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-183000"
	I0925 03:34:26.667820    1555 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-183000"
	I0925 03:34:26.667848    1555 host.go:66] Checking if "addons-183000" exists ...
	I0925 03:34:26.667850    1555 addons.go:69] Setting registry=true in profile "addons-183000"
	I0925 03:34:26.667858    1555 addons.go:231] Setting addon registry=true in "addons-183000"
	I0925 03:34:26.667849    1555 addons.go:69] Setting metrics-server=true in profile "addons-183000"
	I0925 03:34:26.667873    1555 addons.go:231] Setting addon metrics-server=true in "addons-183000"
	I0925 03:34:26.667880    1555 host.go:66] Checking if "addons-183000" exists ...
	I0925 03:34:26.667881    1555 addons.go:69] Setting gcp-auth=true in profile "addons-183000"
	I0925 03:34:26.667902    1555 mustload.go:65] Loading cluster: addons-183000
	I0925 03:34:26.667915    1555 host.go:66] Checking if "addons-183000" exists ...
	I0925 03:34:26.667948    1555 host.go:66] Checking if "addons-183000" exists ...
	I0925 03:34:26.667977    1555 config.go:182] Loaded profile config "addons-183000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 03:34:26.668033    1555 addons.go:69] Setting ingress-dns=true in profile "addons-183000"
	I0925 03:34:26.668035    1555 addons.go:69] Setting inspektor-gadget=true in profile "addons-183000"
	I0925 03:34:26.668042    1555 addons.go:69] Setting storage-provisioner=true in profile "addons-183000"
	I0925 03:34:26.668047    1555 addons.go:231] Setting addon storage-provisioner=true in "addons-183000"
	I0925 03:34:26.668049    1555 addons.go:231] Setting addon inspektor-gadget=true in "addons-183000"
	I0925 03:34:26.668059    1555 host.go:66] Checking if "addons-183000" exists ...
	I0925 03:34:26.668076    1555 host.go:66] Checking if "addons-183000" exists ...
	W0925 03:34:26.668189    1555 host.go:54] host status for "addons-183000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/monitor: connect: connection refused
	W0925 03:34:26.668197    1555 addons.go:277] "addons-183000" is not running, setting volumesnapshots=true and skipping enablement (err=<nil>)
	I0925 03:34:26.667780    1555 addons.go:69] Setting ingress=true in profile "addons-183000"
	I0925 03:34:26.668202    1555 addons.go:231] Setting addon ingress=true in "addons-183000"
	I0925 03:34:26.668215    1555 host.go:66] Checking if "addons-183000" exists ...
	W0925 03:34:26.668271    1555 host.go:54] host status for "addons-183000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/monitor: connect: connection refused
	W0925 03:34:26.668277    1555 addons.go:277] "addons-183000" is not running, setting storage-provisioner=true and skipping enablement (err=<nil>)
	I0925 03:34:26.668038    1555 addons.go:231] Setting addon ingress-dns=true in "addons-183000"
	I0925 03:34:26.668289    1555 host.go:66] Checking if "addons-183000" exists ...
	I0925 03:34:26.667873    1555 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-183000"
	I0925 03:34:26.668351    1555 host.go:66] Checking if "addons-183000" exists ...
	W0925 03:34:26.668420    1555 host.go:54] host status for "addons-183000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/monitor: connect: connection refused
	W0925 03:34:26.668426    1555 addons.go:277] "addons-183000" is not running, setting ingress=true and skipping enablement (err=<nil>)
	I0925 03:34:26.668428    1555 addons.go:467] Verifying addon ingress=true in "addons-183000"
	I0925 03:34:26.671815    1555 out.go:177] * Verifying ingress addon...
	I0925 03:34:26.668077    1555 config.go:182] Loaded profile config "addons-183000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	W0925 03:34:26.668443    1555 host.go:54] host status for "addons-183000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/monitor: connect: connection refused
	W0925 03:34:26.668492    1555 host.go:54] host status for "addons-183000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/monitor: connect: connection refused
	W0925 03:34:26.668560    1555 host.go:54] host status for "addons-183000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/monitor: connect: connection refused
	W0925 03:34:26.668562    1555 host.go:54] host status for "addons-183000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/monitor: connect: connection refused
	W0925 03:34:26.668565    1555 host.go:54] host status for "addons-183000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/monitor: connect: connection refused
	I0925 03:34:26.674660    1555 addons.go:231] Setting addon default-storageclass=true in "addons-183000"
	W0925 03:34:26.679882    1555 addons.go:277] "addons-183000" is not running, setting ingress-dns=true and skipping enablement (err=<nil>)
	W0925 03:34:26.679903    1555 addons.go:277] "addons-183000" is not running, setting csi-hostpath-driver=true and skipping enablement (err=<nil>)
	W0925 03:34:26.679909    1555 addons.go:277] "addons-183000" is not running, setting cloud-spanner=true and skipping enablement (err=<nil>)
	W0925 03:34:26.679910    1555 addons.go:277] "addons-183000" is not running, setting metrics-server=true and skipping enablement (err=<nil>)
	W0925 03:34:26.679914    1555 addons.go:277] "addons-183000" is not running, setting registry=true and skipping enablement (err=<nil>)
	I0925 03:34:26.680408    1555 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0925 03:34:26.680587    1555 host.go:66] Checking if "addons-183000" exists ...
	I0925 03:34:26.685884    1555 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-183000"
	I0925 03:34:26.691851    1555 out.go:177] * Verifying csi-hostpath-driver addon...
	I0925 03:34:26.685950    1555 addons.go:467] Verifying addon metrics-server=true in "addons-183000"
	I0925 03:34:26.685956    1555 addons.go:467] Verifying addon registry=true in "addons-183000"
	I0925 03:34:26.685976    1555 host.go:66] Checking if "addons-183000" exists ...
	I0925 03:34:26.685980    1555 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.20.0
	I0925 03:34:26.693878    1555 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0925 03:34:26.696802    1555 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-183000" context rescaled to 1 replicas
	I0925 03:34:26.698859    1555 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 03:34:26.700109    1555 out.go:177] * Verifying Kubernetes components...
	I0925 03:34:26.699453    1555 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0925 03:34:26.699742    1555 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0925 03:34:26.709918    1555 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0925 03:34:26.713912    1555 out.go:177] * Verifying registry addon...
	I0925 03:34:26.717867    1555 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0925 03:34:26.720802    1555 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/id_rsa Username:docker}
	I0925 03:34:26.717891    1555 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0925 03:34:26.720819    1555 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0925 03:34:26.720825    1555 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/id_rsa Username:docker}
	I0925 03:34:26.721266    1555 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0925 03:34:26.726699    1555 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=registry
	I0925 03:34:26.728776    1555 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0925 03:34:26.751434    1555 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0925 03:34:26.751798    1555 node_ready.go:35] waiting up to 6m0s for node "addons-183000" to be "Ready" ...
	I0925 03:34:26.753298    1555 node_ready.go:49] node "addons-183000" has status "Ready":"True"
	I0925 03:34:26.753320    1555 node_ready.go:38] duration metric: took 1.500542ms waiting for node "addons-183000" to be "Ready" ...
	I0925 03:34:26.753326    1555 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0925 03:34:26.756603    1555 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-nj9v5" in "kube-system" namespace to be "Ready" ...
	I0925 03:34:26.894346    1555 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0925 03:34:26.894357    1555 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0925 03:34:26.894362    1555 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0925 03:34:26.913613    1555 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I0925 03:34:26.913623    1555 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0925 03:34:26.955544    1555 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0925 03:34:26.955558    1555 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0925 03:34:26.966254    1555 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0925 03:34:26.966263    1555 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0925 03:34:26.970978    1555 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0925 03:34:26.970984    1555 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0925 03:34:26.980045    1555 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I0925 03:34:26.980056    1555 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0925 03:34:27.011877    1555 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0925 03:34:27.011886    1555 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I0925 03:34:27.035496    1555 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0925 03:34:27.284243    1555 start.go:923] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I0925 03:34:28.770683    1555 pod_ready.go:102] pod "coredns-5dd5756b68-nj9v5" in "kube-system" namespace has status "Ready":"False"
	I0925 03:34:30.771066    1555 pod_ready.go:102] pod "coredns-5dd5756b68-nj9v5" in "kube-system" namespace has status "Ready":"False"
	I0925 03:34:33.271406    1555 pod_ready.go:102] pod "coredns-5dd5756b68-nj9v5" in "kube-system" namespace has status "Ready":"False"
	I0925 03:34:33.290034    1555 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0925 03:34:33.290047    1555 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/id_rsa Username:docker}
	I0925 03:34:33.333376    1555 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0925 03:34:33.340520    1555 addons.go:231] Setting addon gcp-auth=true in "addons-183000"
	I0925 03:34:33.340540    1555 host.go:66] Checking if "addons-183000" exists ...
	I0925 03:34:33.341291    1555 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0925 03:34:33.341299    1555 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/id_rsa Username:docker}
	I0925 03:34:33.385047    1555 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0925 03:34:33.390017    1555 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0925 03:34:33.393078    1555 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0925 03:34:33.393083    1555 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0925 03:34:33.401443    1555 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0925 03:34:33.401449    1555 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0925 03:34:33.408814    1555 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0925 03:34:33.408821    1555 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0925 03:34:33.415868    1555 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0925 03:34:33.956480    1555 addons.go:467] Verifying addon gcp-auth=true in "addons-183000"
	I0925 03:34:33.962940    1555 out.go:177] * Verifying gcp-auth addon...
	I0925 03:34:33.970267    1555 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0925 03:34:33.972814    1555 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0925 03:34:33.972821    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:33.975859    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:34.479146    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:34.978976    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:35.477962    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:35.770777    1555 pod_ready.go:102] pod "coredns-5dd5756b68-nj9v5" in "kube-system" namespace has status "Ready":"False"
	I0925 03:34:35.978841    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:36.478564    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:36.978738    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:37.478896    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:37.978838    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:38.273778    1555 pod_ready.go:102] pod "coredns-5dd5756b68-nj9v5" in "kube-system" namespace has status "Ready":"False"
	I0925 03:34:38.478811    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:38.770881    1555 pod_ready.go:92] pod "coredns-5dd5756b68-nj9v5" in "kube-system" namespace has status "Ready":"True"
	I0925 03:34:38.770889    1555 pod_ready.go:81] duration metric: took 12.014493833s waiting for pod "coredns-5dd5756b68-nj9v5" in "kube-system" namespace to be "Ready" ...
	I0925 03:34:38.770893    1555 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-183000" in "kube-system" namespace to be "Ready" ...
	I0925 03:34:38.773593    1555 pod_ready.go:92] pod "etcd-addons-183000" in "kube-system" namespace has status "Ready":"True"
	I0925 03:34:38.773599    1555 pod_ready.go:81] duration metric: took 2.702459ms waiting for pod "etcd-addons-183000" in "kube-system" namespace to be "Ready" ...
	I0925 03:34:38.773602    1555 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-183000" in "kube-system" namespace to be "Ready" ...
	I0925 03:34:38.775799    1555 pod_ready.go:92] pod "kube-apiserver-addons-183000" in "kube-system" namespace has status "Ready":"True"
	I0925 03:34:38.775804    1555 pod_ready.go:81] duration metric: took 2.198875ms waiting for pod "kube-apiserver-addons-183000" in "kube-system" namespace to be "Ready" ...
	I0925 03:34:38.775808    1555 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-183000" in "kube-system" namespace to be "Ready" ...
	I0925 03:34:38.777922    1555 pod_ready.go:92] pod "kube-controller-manager-addons-183000" in "kube-system" namespace has status "Ready":"True"
	I0925 03:34:38.777929    1555 pod_ready.go:81] duration metric: took 2.118625ms waiting for pod "kube-controller-manager-addons-183000" in "kube-system" namespace to be "Ready" ...
	I0925 03:34:38.777933    1555 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7t7bh" in "kube-system" namespace to be "Ready" ...
	I0925 03:34:38.780129    1555 pod_ready.go:92] pod "kube-proxy-7t7bh" in "kube-system" namespace has status "Ready":"True"
	I0925 03:34:38.780136    1555 pod_ready.go:81] duration metric: took 2.199875ms waiting for pod "kube-proxy-7t7bh" in "kube-system" namespace to be "Ready" ...
	I0925 03:34:38.780139    1555 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-183000" in "kube-system" namespace to be "Ready" ...
	I0925 03:34:38.977389    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:39.173086    1555 pod_ready.go:92] pod "kube-scheduler-addons-183000" in "kube-system" namespace has status "Ready":"True"
	I0925 03:34:39.173096    1555 pod_ready.go:81] duration metric: took 392.960166ms waiting for pod "kube-scheduler-addons-183000" in "kube-system" namespace to be "Ready" ...
	I0925 03:34:39.173100    1555 pod_ready.go:38] duration metric: took 12.419997458s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0925 03:34:39.173111    1555 api_server.go:52] waiting for apiserver process to appear ...
	I0925 03:34:39.173181    1555 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 03:34:39.178068    1555 api_server.go:72] duration metric: took 12.479424625s to wait for apiserver process to appear ...
	I0925 03:34:39.178075    1555 api_server.go:88] waiting for apiserver healthz status ...
	I0925 03:34:39.178081    1555 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I0925 03:34:39.182471    1555 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
	I0925 03:34:39.183204    1555 api_server.go:141] control plane version: v1.28.2
	I0925 03:34:39.183210    1555 api_server.go:131] duration metric: took 5.132042ms to wait for apiserver health ...
	I0925 03:34:39.183213    1555 system_pods.go:43] waiting for kube-system pods to appear ...
	I0925 03:34:39.372354    1555 system_pods.go:59] 6 kube-system pods found
	I0925 03:34:39.372365    1555 system_pods.go:61] "coredns-5dd5756b68-nj9v5" [b1bb0e62-0339-479f-9572-1e07ab015a1d] Running
	I0925 03:34:39.372368    1555 system_pods.go:61] "etcd-addons-183000" [98901ac9-8165-4fad-b6a6-6c757da8e783] Running
	I0925 03:34:39.372371    1555 system_pods.go:61] "kube-apiserver-addons-183000" [b3899bc1-2055-47fb-aded-8cc3e5ca8b22] Running
	I0925 03:34:39.372373    1555 system_pods.go:61] "kube-controller-manager-addons-183000" [12803b97-0e90-4869-a114-2dce351af701] Running
	I0925 03:34:39.372376    1555 system_pods.go:61] "kube-proxy-7t7bh" [b51c70db-a512-4aae-af91-8b45e6ce9f89] Running
	I0925 03:34:39.372378    1555 system_pods.go:61] "kube-scheduler-addons-183000" [543428f6-b6ce-448c-9d3e-48c775396c75] Running
	I0925 03:34:39.372382    1555 system_pods.go:74] duration metric: took 189.166917ms to wait for pod list to return data ...
	I0925 03:34:39.372386    1555 default_sa.go:34] waiting for default service account to be created ...
	I0925 03:34:39.478483    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:39.569942    1555 default_sa.go:45] found service account: "default"
	I0925 03:34:39.569952    1555 default_sa.go:55] duration metric: took 197.566292ms for default service account to be created ...
	I0925 03:34:39.569955    1555 system_pods.go:116] waiting for k8s-apps to be running ...
	I0925 03:34:39.771555    1555 system_pods.go:86] 6 kube-system pods found
	I0925 03:34:39.771566    1555 system_pods.go:89] "coredns-5dd5756b68-nj9v5" [b1bb0e62-0339-479f-9572-1e07ab015a1d] Running
	I0925 03:34:39.771569    1555 system_pods.go:89] "etcd-addons-183000" [98901ac9-8165-4fad-b6a6-6c757da8e783] Running
	I0925 03:34:39.771571    1555 system_pods.go:89] "kube-apiserver-addons-183000" [b3899bc1-2055-47fb-aded-8cc3e5ca8b22] Running
	I0925 03:34:39.771573    1555 system_pods.go:89] "kube-controller-manager-addons-183000" [12803b97-0e90-4869-a114-2dce351af701] Running
	I0925 03:34:39.771576    1555 system_pods.go:89] "kube-proxy-7t7bh" [b51c70db-a512-4aae-af91-8b45e6ce9f89] Running
	I0925 03:34:39.771579    1555 system_pods.go:89] "kube-scheduler-addons-183000" [543428f6-b6ce-448c-9d3e-48c775396c75] Running
	I0925 03:34:39.771582    1555 system_pods.go:126] duration metric: took 201.627792ms to wait for k8s-apps to be running ...
	I0925 03:34:39.771585    1555 system_svc.go:44] waiting for kubelet service to be running ....
	I0925 03:34:39.771649    1555 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0925 03:34:39.777059    1555 system_svc.go:56] duration metric: took 5.471834ms WaitForService to wait for kubelet.
	I0925 03:34:39.777072    1555 kubeadm.go:581] duration metric: took 13.078440792s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0925 03:34:39.777081    1555 node_conditions.go:102] verifying NodePressure condition ...
	I0925 03:34:39.970496    1555 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0925 03:34:39.970507    1555 node_conditions.go:123] node cpu capacity is 2
	I0925 03:34:39.970512    1555 node_conditions.go:105] duration metric: took 193.43225ms to run NodePressure ...
	I0925 03:34:39.970518    1555 start.go:228] waiting for startup goroutines ...
	I0925 03:34:39.977869    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:40.478718    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:40.978494    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:41.478330    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:41.978723    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:42.478484    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:42.978499    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:43.478310    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:43.978560    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:44.478626    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:44.978747    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:45.478652    1555 kapi.go:107] duration metric: took 11.508592542s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0925 03:34:45.482917    1555 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-183000 cluster.
	I0925 03:34:45.486908    1555 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0925 03:34:45.489839    1555 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0925 03:40:26.681420    1555 kapi.go:107] duration metric: took 6m0.007630792s to wait for app.kubernetes.io/name=ingress-nginx ...
	W0925 03:40:26.681519    1555 out.go:239] ! Enabling 'ingress' returned an error: running callbacks: [waiting for app.kubernetes.io/name=ingress-nginx pods: context deadline exceeded]
	I0925 03:40:26.713271    1555 kapi.go:107] duration metric: took 6m0.020443166s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	W0925 03:40:26.713301    1555 out.go:239] ! Enabling 'csi-hostpath-driver' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=csi-hostpath-driver pods: context deadline exceeded]
	I0925 03:40:26.715027    1555 kapi.go:107] duration metric: took 6m0.000386167s to wait for kubernetes.io/minikube-addons=registry ...
	W0925 03:40:26.715058    1555 out.go:239] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
	I0925 03:40:26.720408    1555 out.go:177] * Enabled addons: volumesnapshots, storage-provisioner, cloud-spanner, ingress-dns, metrics-server, default-storageclass, inspektor-gadget, gcp-auth
	I0925 03:40:26.729284    1555 addons.go:502] enable addons completed in 6m0.068199458s: enabled=[volumesnapshots storage-provisioner cloud-spanner ingress-dns metrics-server default-storageclass inspektor-gadget gcp-auth]
	I0925 03:40:26.729295    1555 start.go:233] waiting for cluster config update ...
	I0925 03:40:26.729300    1555 start.go:242] writing updated cluster config ...
	I0925 03:40:26.729761    1555 ssh_runner.go:195] Run: rm -f paused
	I0925 03:40:26.760421    1555 start.go:600] kubectl: 1.27.2, cluster: 1.28.2 (minor skew: 1)
	I0925 03:40:26.764251    1555 out.go:177] * Done! kubectl is now configured to use "addons-183000" cluster and "default" namespace by default
	
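Note: the long run of `kubectl get sa default` invocations between 03:34:13 and 03:34:26 above is minikube polling, at roughly 500ms intervals, for the default service account to exist before binding cluster-admin to kube-system (the 13.66s elevateKubeSystemPrivileges wait reported at 03:34:26). A minimal Go sketch of that polling pattern, assuming kubectl is on PATH and the kubeconfig path from the log; this is illustrative, not minikube's actual implementation:

	// poll_default_sa.go: re-creates the fixed-interval retry loop seen in the
	// log, where each attempt shells out to kubectl and the loop exits as soon
	// as the default service account can be fetched. Timeout value is assumed.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(2 * time.Minute) // assumed overall timeout
		for time.Now().Before(deadline) {
			// Equivalent of: kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
			cmd := exec.Command("kubectl", "get", "sa", "default",
				"--kubeconfig", "/var/lib/minikube/kubeconfig")
			if err := cmd.Run(); err == nil {
				fmt.Println("default service account exists")
				return
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
		}
		fmt.Println("timed out waiting for default service account")
	}
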
	* 
	* ==> Docker <==
	* -- Journal begins at Mon 2023-09-25 10:33:55 UTC, ends at Mon 2023-09-25 10:52:27 UTC. --
	Sep 25 10:34:40 addons-183000 dockerd[1111]: time="2023-09-25T10:34:40.925249212Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 25 10:34:40 addons-183000 dockerd[1105]: time="2023-09-25T10:34:40.949634588Z" level=info msg="ignoring event" container=eba229bd5f5438b2796da9556e96ba7c846a14868d7823776ae986f837970ff9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 25 10:34:40 addons-183000 dockerd[1111]: time="2023-09-25T10:34:40.949715383Z" level=info msg="shim disconnected" id=eba229bd5f5438b2796da9556e96ba7c846a14868d7823776ae986f837970ff9 namespace=moby
	Sep 25 10:34:40 addons-183000 dockerd[1111]: time="2023-09-25T10:34:40.949742385Z" level=warning msg="cleaning up after shim disconnected" id=eba229bd5f5438b2796da9556e96ba7c846a14868d7823776ae986f837970ff9 namespace=moby
	Sep 25 10:34:40 addons-183000 dockerd[1111]: time="2023-09-25T10:34:40.949747543Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 25 10:34:42 addons-183000 dockerd[1111]: time="2023-09-25T10:34:42.189709501Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 25 10:34:42 addons-183000 dockerd[1111]: time="2023-09-25T10:34:42.189752119Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 10:34:42 addons-183000 dockerd[1111]: time="2023-09-25T10:34:42.189766269Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 25 10:34:42 addons-183000 dockerd[1111]: time="2023-09-25T10:34:42.189777049Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 10:34:42 addons-183000 cri-dockerd[998]: time="2023-09-25T10:34:42Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/217fc96b3ae84c57e9bdecb97411c0563e8969ebf62505726cfe663cf31c1941/resolv.conf as [nameserver 10.96.0.10 search gcp-auth.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 25 10:34:42 addons-183000 dockerd[1105]: time="2023-09-25T10:34:42.358172410Z" level=info msg="ignoring event" container=d009b921a4cc83c6746a6427d33a20b5315cc03832a52dae5f1cc5bda62fc19b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 25 10:34:42 addons-183000 dockerd[1111]: time="2023-09-25T10:34:42.358246450Z" level=info msg="shim disconnected" id=d009b921a4cc83c6746a6427d33a20b5315cc03832a52dae5f1cc5bda62fc19b namespace=moby
	Sep 25 10:34:42 addons-183000 dockerd[1111]: time="2023-09-25T10:34:42.358270381Z" level=warning msg="cleaning up after shim disconnected" id=d009b921a4cc83c6746a6427d33a20b5315cc03832a52dae5f1cc5bda62fc19b namespace=moby
	Sep 25 10:34:42 addons-183000 dockerd[1111]: time="2023-09-25T10:34:42.358274585Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 25 10:34:42 addons-183000 dockerd[1111]: time="2023-09-25T10:34:42.372096465Z" level=warning msg="cleanup warnings time=\"2023-09-25T10:34:42Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Sep 25 10:34:42 addons-183000 dockerd[1105]: time="2023-09-25T10:34:42.400036385Z" level=warning msg="reference for unknown type: " digest="sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf" remote="gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf"
	Sep 25 10:34:42 addons-183000 dockerd[1105]: time="2023-09-25T10:34:42.404130643Z" level=info msg="ignoring event" container=d09446869187232df599a79609b2cc6878507cb6c4070aec9a79632485a47117 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 25 10:34:42 addons-183000 dockerd[1111]: time="2023-09-25T10:34:42.404287046Z" level=info msg="shim disconnected" id=d09446869187232df599a79609b2cc6878507cb6c4070aec9a79632485a47117 namespace=moby
	Sep 25 10:34:42 addons-183000 dockerd[1111]: time="2023-09-25T10:34:42.404320674Z" level=warning msg="cleaning up after shim disconnected" id=d09446869187232df599a79609b2cc6878507cb6c4070aec9a79632485a47117 namespace=moby
	Sep 25 10:34:42 addons-183000 dockerd[1111]: time="2023-09-25T10:34:42.404325086Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 25 10:34:44 addons-183000 cri-dockerd[998]: time="2023-09-25T10:34:44Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf: Status: Downloaded newer image for gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf"
	Sep 25 10:34:44 addons-183000 dockerd[1111]: time="2023-09-25T10:34:44.321936694Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 25 10:34:44 addons-183000 dockerd[1111]: time="2023-09-25T10:34:44.321971829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 10:34:44 addons-183000 dockerd[1111]: time="2023-09-25T10:34:44.321982528Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 25 10:34:44 addons-183000 dockerd[1111]: time="2023-09-25T10:34:44.321989314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
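Note: the cri-dockerd line at 10:34:42 shows pod DNS being wired up: the container's resolv.conf is rewritten to point at the cluster DNS service instead of the host's resolver. Reconstructed from that log line, the file inside the gcp-auth pod would read:

	nameserver 10.96.0.10
	search gcp-auth.svc.cluster.local svc.cluster.local cluster.local
	options ndots:5
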
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f0ceeef2fd99f       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf        17 minutes ago      Running             gcp-auth                  0                   217fc96b3ae84       gcp-auth-d4c87556c-fgkgk
	3214d7d3645b3       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:01b7311f9512411ef6530e09dbdd3aeaea0abc4101227dbead4d44c36b255ca7   17 minutes ago      Running             gadget                    0                   1f38ec635c03d       gadget-dmqnx
	09ae8580d310e       97e04611ad434                                                                                                       17 minutes ago      Running             coredns                   0                   9802832060d13       coredns-5dd5756b68-nj9v5
	fff72387d957b       7da62c127fc0f                                                                                                       18 minutes ago      Running             kube-proxy                0                   2514b88f9fbec       kube-proxy-7t7bh
	e24563a552742       89d57b83c1786                                                                                                       18 minutes ago      Running             kube-controller-manager   0                   7170972f2383c       kube-controller-manager-addons-183000
	e38f0c6d58f79       30bb499447fe1                                                                                                       18 minutes ago      Running             kube-apiserver            0                   e3ec8dad501d8       kube-apiserver-addons-183000
	202a7fdac8250       9cdd6470f48c8                                                                                                       18 minutes ago      Running             etcd                      0                   f07db97eda3c5       etcd-addons-183000
	5a87dfcd0e1a4       64fc40cee3716                                                                                                       18 minutes ago      Running             kube-scheduler            0                   88f62df9ef878       kube-scheduler-addons-183000
	
	* 
	* ==> coredns [09ae8580d310] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:53855 - 12762 "HINFO IN 6175233926506353361.1980247959579836404. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004134462s
	[INFO] 10.244.0.5:53045 - 37584 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000106198s
	[INFO] 10.244.0.5:58309 - 60928 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000170558s
	[INFO] 10.244.0.5:51843 - 23622 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000213104s
	[INFO] 10.244.0.5:42760 - 58990 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000042504s
	[INFO] 10.244.0.5:51340 - 46119 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00004929s
	[INFO] 10.244.0.5:39848 - 8379 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000023105s
	[INFO] 10.244.0.5:32887 - 31577 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001136668s
	[INFO] 10.244.0.5:49269 - 43084 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.001085546s
	
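Note: the NXDOMAIN run above is the expected resolv.conf search-path walk, not a failure. Because `storage.googleapis.com` has fewer dots than the pod's `ndots:5` option (see the resolv.conf reconstructed earlier), the resolver appends each search domain before trying the bare name, which is exactly the sequence CoreDNS logged:

	storage.googleapis.com.gcp-auth.svc.cluster.local  -> NXDOMAIN
	storage.googleapis.com.svc.cluster.local           -> NXDOMAIN
	storage.googleapis.com.cluster.local               -> NXDOMAIN
	storage.googleapis.com                             -> NOERROR
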
	* 
	* ==> describe nodes <==
	* Name:               addons-183000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-183000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1bf6c3d5317028f348e55ea19d261973a6487d3c
	                    minikube.k8s.io/name=addons-183000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_25T03_34_13_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 25 Sep 2023 10:34:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-183000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 25 Sep 2023 10:52:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 25 Sep 2023 10:50:30 +0000   Mon, 25 Sep 2023 10:34:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 25 Sep 2023 10:50:30 +0000   Mon, 25 Sep 2023 10:34:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 25 Sep 2023 10:50:30 +0000   Mon, 25 Sep 2023 10:34:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 25 Sep 2023 10:50:30 +0000   Mon, 25 Sep 2023 10:34:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-183000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 3ec93b0c295a46b69f667e92919bae36
	  System UUID:                3ec93b0c295a46b69f667e92919bae36
	  Boot ID:                    e140f335-14d6-4d36-af6f-4c16a72ee860
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  gadget                      gadget-dmqnx                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  gcp-auth                    gcp-auth-d4c87556c-fgkgk                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 coredns-5dd5756b68-nj9v5                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     18m
	  kube-system                 etcd-addons-183000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         18m
	  kube-system                 kube-apiserver-addons-183000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-addons-183000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-7t7bh                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-addons-183000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 18m   kube-proxy       
	  Normal  Starting                 18m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  18m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  18m   kubelet          Node addons-183000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m   kubelet          Node addons-183000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m   kubelet          Node addons-183000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                18m   kubelet          Node addons-183000 status is now: NodeReady
	  Normal  RegisteredNode           18m   node-controller  Node addons-183000 event: Registered Node addons-183000 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000000] KASLR disabled due to lack of seed
	[  +0.641440] EINJ: EINJ table not found.
	[  +0.489201] systemd-fstab-generator[117]: Ignoring "noauto" for root device
	[  +0.043090] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000792] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +3.110509] systemd-fstab-generator[482]: Ignoring "noauto" for root device
	[  +0.074666] systemd-fstab-generator[494]: Ignoring "noauto" for root device
	[  +0.418795] systemd-fstab-generator[667]: Ignoring "noauto" for root device
	[  +0.183648] systemd-fstab-generator[704]: Ignoring "noauto" for root device
	[  +0.073331] systemd-fstab-generator[715]: Ignoring "noauto" for root device
	[  +0.088908] systemd-fstab-generator[728]: Ignoring "noauto" for root device
	[  +1.149460] kauditd_printk_skb: 17 callbacks suppressed
	[  +0.104006] systemd-fstab-generator[917]: Ignoring "noauto" for root device
	[  +0.078468] systemd-fstab-generator[928]: Ignoring "noauto" for root device
	[  +0.058376] systemd-fstab-generator[939]: Ignoring "noauto" for root device
	[  +0.070842] systemd-fstab-generator[950]: Ignoring "noauto" for root device
	[  +0.085054] systemd-fstab-generator[991]: Ignoring "noauto" for root device
	[Sep25 10:34] systemd-fstab-generator[1098]: Ignoring "noauto" for root device
	[  +2.191489] kauditd_printk_skb: 29 callbacks suppressed
	[  +2.399489] systemd-fstab-generator[1471]: Ignoring "noauto" for root device
	[  +5.122490] systemd-fstab-generator[2347]: Ignoring "noauto" for root device
	[ +14.463207] kauditd_printk_skb: 41 callbacks suppressed
	[  +6.798894] kauditd_printk_skb: 21 callbacks suppressed
	[  +4.810513] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[  +3.498700] kauditd_printk_skb: 12 callbacks suppressed
	
	* 
	* ==> etcd [202a7fdac825] <==
	* {"level":"info","ts":"2023-09-25T10:34:09.756472Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-25T10:34:09.756619Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-25T10:34:09.756498Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-25T10:34:09.756725Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-25T10:34:09.756515Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-25T10:34:09.757894Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-25T10:34:09.756532Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-25T10:34:09.758174Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-25T10:34:09.757894Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.2:2379"}
	{"level":"info","ts":"2023-09-25T10:34:31.937317Z","caller":"traceutil/trace.go:171","msg":"trace[667548922] transaction","detail":"{read_only:false; response_revision:416; number_of_response:1; }","duration":"126.937574ms","start":"2023-09-25T10:34:31.810371Z","end":"2023-09-25T10:34:31.937309Z","steps":["trace[667548922] 'process raft request'  (duration: 126.824018ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-25T10:34:36.99882Z","caller":"traceutil/trace.go:171","msg":"trace[243510449] linearizableReadLoop","detail":"{readStateIndex:482; appliedIndex:481; }","duration":"165.982552ms","start":"2023-09-25T10:34:36.832829Z","end":"2023-09-25T10:34:36.998811Z","steps":["trace[243510449] 'read index received'  (duration: 165.770762ms)","trace[243510449] 'applied index is now lower than readState.Index'  (duration: 211.209µs)"],"step_count":2}
	{"level":"warn","ts":"2023-09-25T10:34:36.998969Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"166.151453ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2023-09-25T10:34:36.999019Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.796797ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-5dd5756b68-nj9v5\" ","response":"range_response_count:1 size:5002"}
	{"level":"info","ts":"2023-09-25T10:34:36.999045Z","caller":"traceutil/trace.go:171","msg":"trace[2057756314] range","detail":"{range_begin:/registry/pods/kube-system/coredns-5dd5756b68-nj9v5; range_end:; response_count:1; response_revision:469; }","duration":"123.811177ms","start":"2023-09-25T10:34:36.875219Z","end":"2023-09-25T10:34:36.99903Z","steps":["trace[2057756314] 'agreement among raft nodes before linearized reading'  (duration: 123.788776ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-25T10:34:36.999164Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"162.803156ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:6 size:31393"}
	{"level":"info","ts":"2023-09-25T10:34:36.999205Z","caller":"traceutil/trace.go:171","msg":"trace[1483278895] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:6; response_revision:469; }","duration":"162.834825ms","start":"2023-09-25T10:34:36.836356Z","end":"2023-09-25T10:34:36.99919Z","steps":["trace[1483278895] 'agreement among raft nodes before linearized reading'  (duration: 162.701625ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-25T10:34:36.999Z","caller":"traceutil/trace.go:171","msg":"trace[3634572] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:0; response_revision:469; }","duration":"166.183579ms","start":"2023-09-25T10:34:36.832812Z","end":"2023-09-25T10:34:36.998995Z","steps":["trace[3634572] 'agreement among raft nodes before linearized reading'  (duration: 166.053912ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-25T10:34:36.998947Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"163.574471ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:6 size:31393"}
	{"level":"info","ts":"2023-09-25T10:34:36.999285Z","caller":"traceutil/trace.go:171","msg":"trace[819315326] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:6; response_revision:469; }","duration":"163.922954ms","start":"2023-09-25T10:34:36.83536Z","end":"2023-09-25T10:34:36.999283Z","steps":["trace[819315326] 'agreement among raft nodes before linearized reading'  (duration: 163.541307ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-25T10:44:09.779305Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":608}
	{"level":"info","ts":"2023-09-25T10:44:09.779775Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":608,"took":"346.08µs","hash":977468107}
	{"level":"info","ts":"2023-09-25T10:44:09.779794Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":977468107,"revision":608,"compact-revision":-1}
	{"level":"info","ts":"2023-09-25T10:49:09.783821Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":698}
	{"level":"info","ts":"2023-09-25T10:49:09.784257Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":698,"took":"244.664µs","hash":3592134345}
	{"level":"info","ts":"2023-09-25T10:49:09.784273Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3592134345,"revision":698,"compact-revision":608}
	
	* 
	* ==> gcp-auth [f0ceeef2fd99] <==
	* 2023/09/25 10:34:44 GCP Auth Webhook started!
	
	* 
	* ==> kernel <==
	*  10:52:27 up 18 min,  0 users,  load average: 0.01, 0.10, 0.10
	Linux addons-183000 5.10.57 #1 SMP PREEMPT Mon Sep 18 20:10:16 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [e38f0c6d58f7] <==
	* I0925 10:34:10.471968       1 aggregator.go:166] initial CRD sync complete...
	I0925 10:34:10.471973       1 autoregister_controller.go:141] Starting autoregister controller
	I0925 10:34:10.471976       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0925 10:34:10.471978       1 cache.go:39] Caches are synced for autoregister controller
	I0925 10:34:11.347033       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0925 10:34:11.348323       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0925 10:34:11.348329       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0925 10:34:11.481757       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0925 10:34:11.494186       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0925 10:34:11.536039       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0925 10:34:11.538110       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.105.2]
	I0925 10:34:11.538484       1 controller.go:624] quota admission added evaluator for: endpoints
	I0925 10:34:11.539858       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0925 10:34:12.380709       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0925 10:34:12.885080       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0925 10:34:12.893075       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0925 10:34:12.904077       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0925 10:34:26.498134       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0925 10:34:26.509156       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0925 10:34:27.526494       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0925 10:34:34.002889       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.108.108.100"}
	I0925 10:34:34.022823       1 controller.go:624] quota admission added evaluator for: jobs.batch
	I0925 10:39:10.399639       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0925 10:44:10.399758       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0925 10:49:10.400399       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [e24563a55274] <==
	* I0925 10:34:41.204786       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0925 10:34:41.209581       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0925 10:34:42.391453       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0925 10:34:42.427746       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0925 10:34:43.302972       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0925 10:34:43.306540       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0925 10:34:43.393720       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0925 10:34:43.396202       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0925 10:34:43.398035       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0925 10:34:43.398118       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-create" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0925 10:34:43.433755       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0925 10:34:43.448762       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0925 10:34:43.451889       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0925 10:34:43.452265       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0925 10:34:45.326036       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-d4c87556c" duration="3.706108ms"
	I0925 10:34:45.326133       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-d4c87556c" duration="15.863µs"
	I0925 10:34:56.694544       1 resource_quota_monitor.go:210] "QuotaMonitor created object count evaluator" resource="traces.gadget.kinvolk.io"
	I0925 10:34:56.694776       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0925 10:34:56.795338       1 shared_informer.go:318] Caches are synced for resource quota
	I0925 10:34:57.011337       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0925 10:34:57.011353       1 shared_informer.go:318] Caches are synced for garbage collector
	I0925 10:35:13.027887       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0925 10:35:13.028052       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0925 10:35:13.045569       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0925 10:35:13.045933       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	
	* 
	* ==> kube-proxy [fff72387d957] <==
	* I0925 10:34:27.163880       1 server_others.go:69] "Using iptables proxy"
	I0925 10:34:27.181208       1 node.go:141] Successfully retrieved node IP: 192.168.105.2
	I0925 10:34:27.228178       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0925 10:34:27.228201       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0925 10:34:27.231917       1 server_others.go:152] "Using iptables Proxier"
	I0925 10:34:27.231983       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0925 10:34:27.232100       1 server.go:846] "Version info" version="v1.28.2"
	I0925 10:34:27.232211       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0925 10:34:27.232663       1 config.go:188] "Starting service config controller"
	I0925 10:34:27.232700       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0925 10:34:27.232734       1 config.go:97] "Starting endpoint slice config controller"
	I0925 10:34:27.232760       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0925 10:34:27.233047       1 config.go:315] "Starting node config controller"
	I0925 10:34:27.233085       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0925 10:34:27.333424       1 shared_informer.go:318] Caches are synced for node config
	I0925 10:34:27.333462       1 shared_informer.go:318] Caches are synced for service config
	I0925 10:34:27.333490       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [5a87dfcd0e1a] <==
	* W0925 10:34:10.412769       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0925 10:34:10.413000       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0925 10:34:10.412552       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0925 10:34:10.413020       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0925 10:34:10.412572       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0925 10:34:10.413082       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0925 10:34:10.412878       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0925 10:34:10.413107       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0925 10:34:11.233945       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0925 10:34:11.233969       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0925 10:34:11.245555       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0925 10:34:11.245565       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0925 10:34:11.257234       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0925 10:34:11.257245       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0925 10:34:11.305366       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0925 10:34:11.305376       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0925 10:34:11.335532       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0925 10:34:11.335546       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0925 10:34:11.379250       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0925 10:34:11.379349       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0925 10:34:11.401540       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0925 10:34:11.401585       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0925 10:34:11.494359       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0925 10:34:11.494379       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0925 10:34:13.407721       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-09-25 10:33:55 UTC, ends at Mon 2023-09-25 10:52:27 UTC. --
	Sep 25 10:47:12 addons-183000 kubelet[2366]: E0925 10:47:12.961209    2366 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 25 10:47:12 addons-183000 kubelet[2366]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 25 10:47:12 addons-183000 kubelet[2366]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 25 10:47:12 addons-183000 kubelet[2366]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 25 10:48:12 addons-183000 kubelet[2366]: E0925 10:48:12.960980    2366 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 25 10:48:12 addons-183000 kubelet[2366]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 25 10:48:12 addons-183000 kubelet[2366]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 25 10:48:12 addons-183000 kubelet[2366]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 25 10:49:12 addons-183000 kubelet[2366]: E0925 10:49:12.961406    2366 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 25 10:49:12 addons-183000 kubelet[2366]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 25 10:49:12 addons-183000 kubelet[2366]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 25 10:49:12 addons-183000 kubelet[2366]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 25 10:49:12 addons-183000 kubelet[2366]: W0925 10:49:12.970035    2366 machine.go:65] Cannot read vendor id correctly, set empty.
	Sep 25 10:50:12 addons-183000 kubelet[2366]: E0925 10:50:12.960817    2366 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 25 10:50:12 addons-183000 kubelet[2366]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 25 10:50:12 addons-183000 kubelet[2366]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 25 10:50:12 addons-183000 kubelet[2366]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 25 10:51:12 addons-183000 kubelet[2366]: E0925 10:51:12.961216    2366 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 25 10:51:12 addons-183000 kubelet[2366]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 25 10:51:12 addons-183000 kubelet[2366]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 25 10:51:12 addons-183000 kubelet[2366]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 25 10:52:12 addons-183000 kubelet[2366]: E0925 10:52:12.961551    2366 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 25 10:52:12 addons-183000 kubelet[2366]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 25 10:52:12 addons-183000 kubelet[2366]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 25 10:52:12 addons-183000 kubelet[2366]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-183000 -n addons-183000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-183000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (720.89s)
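
Note on the kubelet entries in the log above: the recurring "Could not set up iptables canary" error carries its own hint ("do you need to insmod?"), i.e. the guest kernel exposes no ip6tables "nat" table. A minimal check sketch, assuming shell access via minikube ssh and treating ip6table_nat as the module that provides that table (an assumption about this Buildroot 5.10.57 image, not confirmed by the log):

	# Look for the module, try loading it, then list the nat table.
	minikube ssh -p addons-183000 -- 'lsmod | grep ip6table_nat; sudo modprobe ip6table_nat; sudo ip6tables -t nat -L -n | head -n 3'

If the module is simply absent from the ISO, the canary failure is likely background noise rather than the cause of the 720s registry timeout.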

TestAddons/parallel/Ingress (0.77s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:183: (dbg) Run:  kubectl --context addons-183000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Non-zero exit: kubectl --context addons-183000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: exit status 1 (35.019041ms)

** stderr ** 
	error: no matching resources found

** /stderr **
addons_test.go:184: failed waiting for ingress-nginx-controller : exit status 1
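
"no matching resources found" means the selector app.kubernetes.io/component=controller matched zero pods in ingress-nginx, so kubectl wait failed immediately (after 35ms) instead of waiting out the 90s timeout: the controller pod was never created. A first-pass diagnosis sketch, reusing the test's own --context flag (what these commands would show here is an assumption, not taken from this log):

	kubectl --context addons-183000 get pods,deploy -n ingress-nginx -o wide
	kubectl --context addons-183000 get events -n ingress-nginx --sort-by=.lastTimestamp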
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-183000 -n addons-183000
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-183000 logs -n 25
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-427000 | jenkins | v1.31.2 | 25 Sep 23 03:33 PDT |                     |
	|         | -p download-only-427000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-427000 | jenkins | v1.31.2 | 25 Sep 23 03:33 PDT |                     |
	|         | -p download-only-427000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.31.2 | 25 Sep 23 03:33 PDT | 25 Sep 23 03:33 PDT |
	| delete  | -p download-only-427000        | download-only-427000 | jenkins | v1.31.2 | 25 Sep 23 03:33 PDT | 25 Sep 23 03:33 PDT |
	| delete  | -p download-only-427000        | download-only-427000 | jenkins | v1.31.2 | 25 Sep 23 03:33 PDT | 25 Sep 23 03:33 PDT |
	| start   | --download-only -p             | binary-mirror-317000 | jenkins | v1.31.2 | 25 Sep 23 03:33 PDT |                     |
	|         | binary-mirror-317000           |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --binary-mirror                |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49310         |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-317000        | binary-mirror-317000 | jenkins | v1.31.2 | 25 Sep 23 03:33 PDT | 25 Sep 23 03:33 PDT |
	| start   | -p addons-183000               | addons-183000        | jenkins | v1.31.2 | 25 Sep 23 03:33 PDT | 25 Sep 23 03:40 PDT |
	|         | --wait=true --memory=4000      |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --addons=registry              |                      |         |         |                     |                     |
	|         | --addons=metrics-server        |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                      |         |         |                     |                     |
	|         | --addons=gcp-auth              |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|         | --addons=ingress               |                      |         |         |                     |                     |
	|         | --addons=ingress-dns           |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-183000        | jenkins | v1.31.2 | 25 Sep 23 03:52 PDT |                     |
	|         | addons-183000                  |                      |         |         |                     |                     |
	| addons  | enable headlamp                | addons-183000        | jenkins | v1.31.2 | 25 Sep 23 03:52 PDT | 25 Sep 23 03:52 PDT |
	|         | -p addons-183000               |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p    | addons-183000        | jenkins | v1.31.2 | 25 Sep 23 04:04 PDT | 25 Sep 23 04:04 PDT |
	|         | addons-183000                  |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/25 03:33:43
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.21.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0925 03:33:43.113263    1555 out.go:296] Setting OutFile to fd 1 ...
	I0925 03:33:43.113390    1555 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 03:33:43.113393    1555 out.go:309] Setting ErrFile to fd 2...
	I0925 03:33:43.113395    1555 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 03:33:43.113522    1555 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17297-1010/.minikube/bin
	I0925 03:33:43.114539    1555 out.go:303] Setting JSON to false
	I0925 03:33:43.129689    1555 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":198,"bootTime":1695637825,"procs":391,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.6","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 03:33:43.129759    1555 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0925 03:33:43.134529    1555 out.go:177] * [addons-183000] minikube v1.31.2 on Darwin 13.6 (arm64)
	I0925 03:33:43.141636    1555 out.go:177]   - MINIKUBE_LOCATION=17297
	I0925 03:33:43.145595    1555 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	I0925 03:33:43.141675    1555 notify.go:220] Checking for updates...
	I0925 03:33:43.149882    1555 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 03:33:43.152528    1555 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 03:33:43.155561    1555 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	I0925 03:33:43.158461    1555 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 03:33:43.161685    1555 driver.go:373] Setting default libvirt URI to qemu:///system
	I0925 03:33:43.165518    1555 out.go:177] * Using the qemu2 driver based on user configuration
	I0925 03:33:43.170494    1555 start.go:298] selected driver: qemu2
	I0925 03:33:43.170500    1555 start.go:902] validating driver "qemu2" against <nil>
	I0925 03:33:43.170505    1555 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 03:33:43.172415    1555 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0925 03:33:43.175485    1555 out.go:177] * Automatically selected the socket_vmnet network
	I0925 03:33:43.178631    1555 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0925 03:33:43.178656    1555 cni.go:84] Creating CNI manager for ""
	I0925 03:33:43.178667    1555 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 03:33:43.178671    1555 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0925 03:33:43.178683    1555 start_flags.go:321] config:
	{Name:addons-183000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-183000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 03:33:43.182821    1555 iso.go:125] acquiring lock: {Name:mkf881a60cf9fd1672567914305ff6f7a4f13809 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 03:33:43.186491    1555 out.go:177] * Starting control plane node addons-183000 in cluster addons-183000
	I0925 03:33:43.194499    1555 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0925 03:33:43.194520    1555 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0925 03:33:43.194535    1555 cache.go:57] Caching tarball of preloaded images
	I0925 03:33:43.194599    1555 preload.go:174] Found /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0925 03:33:43.194605    1555 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0925 03:33:43.194819    1555 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/config.json ...
	I0925 03:33:43.194831    1555 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/config.json: {Name:mk49657fba0a0e3293097f9bbbd8574691cb2471 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 03:33:43.195036    1555 start.go:365] acquiring machines lock for addons-183000: {Name:mk918d99818e2cf21e5912bc291ed18d4b442ba6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 03:33:43.195158    1555 start.go:369] acquired machines lock for "addons-183000" in 116.458µs
	I0925 03:33:43.195167    1555 start.go:93] Provisioning new machine with config: &{Name:addons-183000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-183000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 03:33:43.195202    1555 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 03:33:43.203570    1555 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0925 03:33:43.526310    1555 start.go:159] libmachine.API.Create for "addons-183000" (driver="qemu2")
	I0925 03:33:43.526360    1555 client.go:168] LocalClient.Create starting
	I0925 03:33:43.526524    1555 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem
	I0925 03:33:43.685162    1555 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/cert.pem
	I0925 03:33:43.725069    1555 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17297-1010/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0925 03:33:44.270899    1555 main.go:141] libmachine: Creating SSH key...
	I0925 03:33:44.356373    1555 main.go:141] libmachine: Creating Disk image...
	I0925 03:33:44.356381    1555 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 03:33:44.356565    1555 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/disk.qcow2.raw /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/disk.qcow2
	I0925 03:33:44.389562    1555 main.go:141] libmachine: STDOUT: 
	I0925 03:33:44.389584    1555 main.go:141] libmachine: STDERR: 
	I0925 03:33:44.389658    1555 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/disk.qcow2 +20000M
	I0925 03:33:44.397120    1555 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 03:33:44.397139    1555 main.go:141] libmachine: STDERR: 
	I0925 03:33:44.397152    1555 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/disk.qcow2
	I0925 03:33:44.397157    1555 main.go:141] libmachine: Starting QEMU VM...
	I0925 03:33:44.397194    1555 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:70:b3:50:3d:bc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/disk.qcow2
	I0925 03:33:44.464471    1555 main.go:141] libmachine: STDOUT: 
	I0925 03:33:44.464499    1555 main.go:141] libmachine: STDERR: 
	I0925 03:33:44.464503    1555 main.go:141] libmachine: Attempt 0
	I0925 03:33:44.464522    1555 main.go:141] libmachine: Searching for 4e:70:b3:50:3d:bc in /var/db/dhcpd_leases ...
	I0925 03:33:46.465678    1555 main.go:141] libmachine: Attempt 1
	I0925 03:33:46.465761    1555 main.go:141] libmachine: Searching for 4e:70:b3:50:3d:bc in /var/db/dhcpd_leases ...
	I0925 03:33:48.467021    1555 main.go:141] libmachine: Attempt 2
	I0925 03:33:48.467061    1555 main.go:141] libmachine: Searching for 4e:70:b3:50:3d:bc in /var/db/dhcpd_leases ...
	I0925 03:33:50.468194    1555 main.go:141] libmachine: Attempt 3
	I0925 03:33:50.468212    1555 main.go:141] libmachine: Searching for 4e:70:b3:50:3d:bc in /var/db/dhcpd_leases ...
	I0925 03:33:52.469241    1555 main.go:141] libmachine: Attempt 4
	I0925 03:33:52.469258    1555 main.go:141] libmachine: Searching for 4e:70:b3:50:3d:bc in /var/db/dhcpd_leases ...
	I0925 03:33:54.470316    1555 main.go:141] libmachine: Attempt 5
	I0925 03:33:54.470352    1555 main.go:141] libmachine: Searching for 4e:70:b3:50:3d:bc in /var/db/dhcpd_leases ...
	I0925 03:33:56.471428    1555 main.go:141] libmachine: Attempt 6
	I0925 03:33:56.471461    1555 main.go:141] libmachine: Searching for 4e:70:b3:50:3d:bc in /var/db/dhcpd_leases ...
	I0925 03:33:56.471625    1555 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0925 03:33:56.471679    1555 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:4e:70:b3:50:3d:bc ID:1,4e:70:b3:50:3d:bc Lease:0x6512b393}
	I0925 03:33:56.471685    1555 main.go:141] libmachine: Found match: 4e:70:b3:50:3d:bc
	I0925 03:33:56.471705    1555 main.go:141] libmachine: IP: 192.168.105.2
	I0925 03:33:56.471714    1555 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
	I0925 03:33:57.476002    1555 machine.go:88] provisioning docker machine ...
	I0925 03:33:57.476029    1555 buildroot.go:166] provisioning hostname "addons-183000"
	I0925 03:33:57.476399    1555 main.go:141] libmachine: Using SSH client type: native
	I0925 03:33:57.476656    1555 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100de8760] 0x100deaed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0925 03:33:57.476663    1555 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-183000 && echo "addons-183000" | sudo tee /etc/hostname
	I0925 03:33:57.549226    1555 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-183000
	
	I0925 03:33:57.549294    1555 main.go:141] libmachine: Using SSH client type: native
	I0925 03:33:57.549565    1555 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100de8760] 0x100deaed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0925 03:33:57.549580    1555 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-183000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-183000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-183000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0925 03:33:57.619664    1555 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0925 03:33:57.619678    1555 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17297-1010/.minikube CaCertPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17297-1010/.minikube}
	I0925 03:33:57.619692    1555 buildroot.go:174] setting up certificates
	I0925 03:33:57.619698    1555 provision.go:83] configureAuth start
	I0925 03:33:57.619702    1555 provision.go:138] copyHostCerts
	I0925 03:33:57.619800    1555 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17297-1010/.minikube/key.pem (1679 bytes)
	I0925 03:33:57.620015    1555 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17297-1010/.minikube/ca.pem (1082 bytes)
	I0925 03:33:57.620106    1555 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17297-1010/.minikube/cert.pem (1123 bytes)
	I0925 03:33:57.620180    1555 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca-key.pem org=jenkins.addons-183000 san=[192.168.105.2 192.168.105.2 localhost 127.0.0.1 minikube addons-183000]
	I0925 03:33:57.680529    1555 provision.go:172] copyRemoteCerts
	I0925 03:33:57.680584    1555 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0925 03:33:57.680600    1555 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/id_rsa Username:docker}
	I0925 03:33:57.716693    1555 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0925 03:33:57.724070    1555 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0925 03:33:57.731348    1555 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0925 03:33:57.738044    1555 provision.go:86] duration metric: configureAuth took 118.340875ms
	I0925 03:33:57.738067    1555 buildroot.go:189] setting minikube options for container-runtime
	I0925 03:33:57.738181    1555 config.go:182] Loaded profile config "addons-183000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 03:33:57.738225    1555 main.go:141] libmachine: Using SSH client type: native
	I0925 03:33:57.738442    1555 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100de8760] 0x100deaed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0925 03:33:57.738446    1555 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0925 03:33:57.806528    1555 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0925 03:33:57.806536    1555 buildroot.go:70] root file system type: tmpfs
	I0925 03:33:57.806591    1555 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0925 03:33:57.806639    1555 main.go:141] libmachine: Using SSH client type: native
	I0925 03:33:57.806901    1555 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100de8760] 0x100deaed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0925 03:33:57.806939    1555 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0925 03:33:57.879305    1555 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0925 03:33:57.879349    1555 main.go:141] libmachine: Using SSH client type: native
	I0925 03:33:57.879600    1555 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100de8760] 0x100deaed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0925 03:33:57.879612    1555 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0925 03:33:58.218156    1555 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0925 03:33:58.218176    1555 machine.go:91] provisioned docker machine in 742.178459ms
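The diff-or-install one-liner above makes the unit update idempotent: the staged docker.service.new only replaces the installed file, and docker is only restarted, when the contents differ. A minimal Go sketch of the same pattern follows (an illustration under assumptions: minikube performs these steps over SSH with sudo on the guest, not locally, and the function name is invented):

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// installIfChanged moves the staged unit into place and restarts docker
// only when the installed file is missing or differs from the staged one.
func installIfChanged(staged, installed string) error {
	newData, err := os.ReadFile(staged)
	if err != nil {
		return err
	}
	oldData, err := os.ReadFile(installed) // a missing file counts as "changed"
	if err == nil && bytes.Equal(oldData, newData) {
		return nil // nothing to do
	}
	if err := os.Rename(staged, installed); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "enable", "docker"},
		{"systemctl", "restart", "docker"},
	} {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	// Requires root, as the log's sudo invocation does.
	err := installIfChanged("/lib/systemd/system/docker.service.new",
		"/lib/systemd/system/docker.service")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}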
	I0925 03:33:58.218184    1555 client.go:171] LocalClient.Create took 14.692090292s
	I0925 03:33:58.218196    1555 start.go:167] duration metric: libmachine.API.Create for "addons-183000" took 14.692162542s
	I0925 03:33:58.218201    1555 start.go:300] post-start starting for "addons-183000" (driver="qemu2")
	I0925 03:33:58.218213    1555 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0925 03:33:58.218288    1555 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0925 03:33:58.218298    1555 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/id_rsa Username:docker}
	I0925 03:33:58.255037    1555 ssh_runner.go:195] Run: cat /etc/os-release
	I0925 03:33:58.256454    1555 info.go:137] Remote host: Buildroot 2021.02.12
	I0925 03:33:58.256461    1555 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17297-1010/.minikube/addons for local assets ...
	I0925 03:33:58.256533    1555 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17297-1010/.minikube/files for local assets ...
	I0925 03:33:58.256562    1555 start.go:303] post-start completed in 38.354459ms
	I0925 03:33:58.256920    1555 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/config.json ...
	I0925 03:33:58.257077    1555 start.go:128] duration metric: createHost completed in 15.062148875s
	I0925 03:33:58.257104    1555 main.go:141] libmachine: Using SSH client type: native
	I0925 03:33:58.257337    1555 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100de8760] 0x100deaed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0925 03:33:58.257341    1555 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0925 03:33:58.325173    1555 main.go:141] libmachine: SSH cmd err, output: <nil>: 1695638038.462407626
	
	I0925 03:33:58.325184    1555 fix.go:206] guest clock: 1695638038.462407626
	I0925 03:33:58.325188    1555 fix.go:219] Guest: 2023-09-25 03:33:58.462407626 -0700 PDT Remote: 2023-09-25 03:33:58.257082 -0700 PDT m=+15.162425626 (delta=205.325626ms)
	I0925 03:33:58.325199    1555 fix.go:190] guest clock delta is within tolerance: 205.325626ms
	I0925 03:33:58.325201    1555 start.go:83] releasing machines lock for "addons-183000", held for 15.130317917s
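The guest-clock check above parses the guest's `date +%s.%N` output and accepts the host/guest skew when the delta is small. A minimal sketch of the parsing and comparison (assumed, not minikube's fix.go: the 2s tolerance is an assumption, and the parser assumes a full 9-digit nanosecond field as `%N` prints):

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// guestTime converts `date +%s.%N` output into a time.Time.
func guestTime(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	g, err := guestTime("1695638038.462407626") // value from the log above
	if err != nil {
		panic(err)
	}
	delta := time.Since(g)
	within := math.Abs(delta.Seconds()) < 2.0 // assumed tolerance
	fmt.Printf("guest clock delta %v (within tolerance: %v)\n", delta, within)
}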
	I0925 03:33:58.325486    1555 ssh_runner.go:195] Run: cat /version.json
	I0925 03:33:58.325494    1555 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/id_rsa Username:docker}
	I0925 03:33:58.325516    1555 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0925 03:33:58.325555    1555 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/id_rsa Username:docker}
	I0925 03:33:58.361340    1555 ssh_runner.go:195] Run: systemctl --version
	I0925 03:33:58.402839    1555 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0925 03:33:58.404630    1555 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0925 03:33:58.404664    1555 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0925 03:33:58.409389    1555 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0925 03:33:58.409398    1555 start.go:469] detecting cgroup driver to use...
	I0925 03:33:58.409504    1555 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0925 03:33:58.414731    1555 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0925 03:33:58.417759    1555 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0925 03:33:58.420882    1555 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0925 03:33:58.420905    1555 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0925 03:33:58.424376    1555 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0925 03:33:58.427971    1555 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0925 03:33:58.431438    1555 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0925 03:33:58.434555    1555 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0925 03:33:58.437481    1555 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0925 03:33:58.440650    1555 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0925 03:33:58.444117    1555 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0925 03:33:58.446963    1555 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 03:33:58.506828    1555 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0925 03:33:58.515326    1555 start.go:469] detecting cgroup driver to use...
	I0925 03:33:58.515396    1555 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0925 03:33:58.520350    1555 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0925 03:33:58.525290    1555 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0925 03:33:58.532641    1555 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0925 03:33:58.537661    1555 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0925 03:33:58.542291    1555 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0925 03:33:58.583433    1555 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0925 03:33:58.588627    1555 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0925 03:33:58.594011    1555 ssh_runner.go:195] Run: which cri-dockerd
	I0925 03:33:58.595317    1555 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0925 03:33:58.597772    1555 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0925 03:33:58.602614    1555 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0925 03:33:58.687592    1555 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0925 03:33:58.763371    1555 docker.go:554] configuring docker to use "cgroupfs" as cgroup driver...
	I0925 03:33:58.763431    1555 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0925 03:33:58.768807    1555 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 03:33:58.850856    1555 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0925 03:34:00.021109    1555 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.170257708s)
	I0925 03:34:00.021184    1555 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0925 03:34:00.102397    1555 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0925 03:34:00.182389    1555 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0925 03:34:00.242288    1555 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 03:34:00.310048    1555 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0925 03:34:00.320927    1555 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 03:34:00.397773    1555 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0925 03:34:00.421934    1555 start.go:516] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0925 03:34:00.422022    1555 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0925 03:34:00.424107    1555 start.go:537] Will wait 60s for crictl version
	I0925 03:34:00.424134    1555 ssh_runner.go:195] Run: which crictl
	I0925 03:34:00.425400    1555 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0925 03:34:00.448268    1555 start.go:553] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I0925 03:34:00.448328    1555 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0925 03:34:00.458640    1555 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0925 03:34:00.474285    1555 out.go:204] * Preparing Kubernetes v1.28.2 on Docker 24.0.6 ...
	I0925 03:34:00.474362    1555 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0925 03:34:00.475766    1555 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
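The bash one-liner above updates /etc/hosts idempotently: strip any stale host.minikube.internal line, append the fresh mapping, and copy the result back via a temp file. The same pattern in Go, as a sketch (it assumes tab-separated entries, matching the grep pattern above; the function name is invented):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry removes any existing line for name and appends a fresh
// "ip<TAB>name" mapping, writing the result back atomically.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	out := strings.Join(kept, "\n")
	if out != "" && !strings.HasSuffix(out, "\n") {
		out += "\n"
	}
	out += fmt.Sprintf("%s\t%s\n", ip, name)
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, []byte(out), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, path)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.105.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}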
	I0925 03:34:00.479918    1555 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0925 03:34:00.479959    1555 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0925 03:34:00.485137    1555 docker.go:664] Got preloaded images: 
	I0925 03:34:00.485144    1555 docker.go:670] registry.k8s.io/kube-apiserver:v1.28.2 wasn't preloaded
	I0925 03:34:00.485184    1555 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0925 03:34:00.488328    1555 ssh_runner.go:195] Run: which lz4
	I0925 03:34:00.489753    1555 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0925 03:34:00.490946    1555 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0925 03:34:00.490958    1555 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (356993689 bytes)
	I0925 03:34:01.821604    1555 docker.go:628] Took 1.331913 seconds to copy over tarball
	I0925 03:34:01.821663    1555 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0925 03:34:02.850635    1555 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.028977875s)
	I0925 03:34:02.850646    1555 ssh_runner.go:146] rm: /preloaded.tar.lz4
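The preload sequence above first stats /preloaded.tar.lz4, copies the cached tarball over only when it is missing, unpacks it with lz4-compressed tar, then removes it. A local Go sketch of that flow (assumed for illustration; minikube copies the file over SSH via scp, and the function name is invented):

package main

import (
	"fmt"
	"io"
	"os"
	"os/exec"
)

// ensurePreload stages the cached tarball if absent, extracts it, and cleans up.
func ensurePreload(cached, staged, destDir string) error {
	if _, err := os.Stat(staged); err == nil {
		return nil // already staged
	}
	src, err := os.Open(cached)
	if err != nil {
		return err
	}
	defer src.Close()
	dst, err := os.Create(staged)
	if err != nil {
		return err
	}
	if _, err := io.Copy(dst, src); err != nil {
		dst.Close()
		return err
	}
	if err := dst.Close(); err != nil {
		return err
	}
	// Matches the log's `tar -I lz4 -C /var -xf /preloaded.tar.lz4`.
	if out, err := exec.Command("tar", "-I", "lz4", "-C", destDir, "-xf", staged).CombinedOutput(); err != nil {
		return fmt.Errorf("untar: %v: %s", err, out)
	}
	return os.Remove(staged)
}

func main() {
	if err := ensurePreload(
		"preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4",
		"/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}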
	I0925 03:34:02.866214    1555 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0925 03:34:02.869216    1555 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0925 03:34:02.874196    1555 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 03:34:02.955148    1555 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0925 03:34:05.167252    1555 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.212127209s)
	I0925 03:34:05.167356    1555 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0925 03:34:05.173293    1555 docker.go:664] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.2
	registry.k8s.io/kube-scheduler:v1.28.2
	registry.k8s.io/kube-proxy:v1.28.2
	registry.k8s.io/kube-controller-manager:v1.28.2
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0925 03:34:05.173304    1555 cache_images.go:84] Images are preloaded, skipping loading
	I0925 03:34:05.173372    1555 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0925 03:34:05.180961    1555 cni.go:84] Creating CNI manager for ""
	I0925 03:34:05.180975    1555 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 03:34:05.180995    1555 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0925 03:34:05.181006    1555 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-183000 NodeName:addons-183000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0925 03:34:05.181071    1555 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-183000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0925 03:34:05.181111    1555 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-183000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:addons-183000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0925 03:34:05.181162    1555 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I0925 03:34:05.184441    1555 binaries.go:44] Found k8s binaries, skipping transfer
	I0925 03:34:05.184477    1555 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0925 03:34:05.187654    1555 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0925 03:34:05.192980    1555 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0925 03:34:05.197983    1555 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I0925 03:34:05.202799    1555 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0925 03:34:05.204148    1555 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0925 03:34:05.208295    1555 certs.go:56] Setting up /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000 for IP: 192.168.105.2
	I0925 03:34:05.208303    1555 certs.go:190] acquiring lock for shared ca certs: {Name:mk095b03680bcdeba6c321a9f458c9fbafa67639 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 03:34:05.208463    1555 certs.go:204] generating minikubeCA CA: /Users/jenkins/minikube-integration/17297-1010/.minikube/ca.key
	I0925 03:34:05.279404    1555 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17297-1010/.minikube/ca.crt ...
	I0925 03:34:05.279413    1555 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/ca.crt: {Name:mk70f9fc8ba800117a8a8b4d751d3a98c619cb54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 03:34:05.279591    1555 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17297-1010/.minikube/ca.key ...
	I0925 03:34:05.279595    1555 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/ca.key: {Name:mkd44aa01a2f3e5b978643c9a3feb1028c2bb791 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 03:34:05.279712    1555 certs.go:204] generating proxyClientCA CA: /Users/jenkins/minikube-integration/17297-1010/.minikube/proxy-client-ca.key
	I0925 03:34:05.342350    1555 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17297-1010/.minikube/proxy-client-ca.crt ...
	I0925 03:34:05.342356    1555 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/proxy-client-ca.crt: {Name:mkc0af119bea050a868312bfe8f89d742604990c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 03:34:05.342558    1555 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17297-1010/.minikube/proxy-client-ca.key ...
	I0925 03:34:05.342563    1555 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/proxy-client-ca.key: {Name:mka9b8c6393173e2358c8b84eb9bff6ea6851f33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 03:34:05.342694    1555 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/client.key
	I0925 03:34:05.342700    1555 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/client.crt with IP's: []
	I0925 03:34:05.380999    1555 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/client.crt ...
	I0925 03:34:05.381013    1555 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/client.crt: {Name:mkec4b98dbbfb657baac4f5fae18fe43bd8b5970 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 03:34:05.381125    1555 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/client.key ...
	I0925 03:34:05.381130    1555 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/client.key: {Name:mk8be81ea1673fa1894559e8faa2fa2323674614 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 03:34:05.381227    1555 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/apiserver.key.96055969
	I0925 03:34:05.381235    1555 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/apiserver.crt.96055969 with IP's: [192.168.105.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0925 03:34:05.441721    1555 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/apiserver.crt.96055969 ...
	I0925 03:34:05.441725    1555 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/apiserver.crt.96055969: {Name:mkba38dc1a56241112b86d1503bca4f2588c1bf7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 03:34:05.441849    1555 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/apiserver.key.96055969 ...
	I0925 03:34:05.441852    1555 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/apiserver.key.96055969: {Name:mk41423e9550dcb3371da4467db52078d1bb4d78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 03:34:05.441956    1555 certs.go:337] copying /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/apiserver.crt.96055969 -> /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/apiserver.crt
	I0925 03:34:05.442053    1555 certs.go:341] copying /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/apiserver.key.96055969 -> /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/apiserver.key
	I0925 03:34:05.442146    1555 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/proxy-client.key
	I0925 03:34:05.442154    1555 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/proxy-client.crt with IP's: []
	I0925 03:34:05.578079    1555 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/proxy-client.crt ...
	I0925 03:34:05.578082    1555 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/proxy-client.crt: {Name:mkbd132fd7a0f2cb28d572f95bd43c9a1ef215f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 03:34:05.578216    1555 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/proxy-client.key ...
	I0925 03:34:05.578218    1555 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/proxy-client.key: {Name:mkf93f480df65e887c0e782806fe1d821d05370d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
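Each "generating ... signed cert" step above issues a certificate from the minikubeCA with the listed IP and DNS SANs. A self-contained Go sketch of issuing an IP-SAN server certificate from a CA (assumed for illustration, not minikube's crypto.go: it generates a throwaway CA in-process, whereas minikube loads ca.crt/ca.key from disk):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Throwaway CA (assumption; minikube reuses its existing CA key pair).
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	must(err)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	must(err)
	caCert, err := x509.ParseCertificate(caDER)
	must(err)

	// Server cert with IP and DNS SANs, mirroring the san=[...] list above.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	must(err)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		IPAddresses:  []net.IP{net.ParseIP("192.168.105.2"), net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1")},
		DNSNames:     []string{"localhost", "minikube", "addons-183000"},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	must(err)
	must(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
}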
	I0925 03:34:05.578436    1555 certs.go:437] found cert: /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca-key.pem (1675 bytes)
	I0925 03:34:05.578458    1555 certs.go:437] found cert: /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem (1082 bytes)
	I0925 03:34:05.578479    1555 certs.go:437] found cert: /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/cert.pem (1123 bytes)
	I0925 03:34:05.578499    1555 certs.go:437] found cert: /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/key.pem (1679 bytes)
	I0925 03:34:05.578876    1555 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0925 03:34:05.587435    1555 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0925 03:34:05.594545    1555 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0925 03:34:05.601433    1555 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0925 03:34:05.608504    1555 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0925 03:34:05.616247    1555 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0925 03:34:05.623555    1555 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0925 03:34:05.630877    1555 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0925 03:34:05.637827    1555 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0925 03:34:05.644421    1555 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0925 03:34:05.650432    1555 ssh_runner.go:195] Run: openssl version
	I0925 03:34:05.652383    1555 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0925 03:34:05.655860    1555 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0925 03:34:05.657450    1555 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 25 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I0925 03:34:05.657472    1555 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0925 03:34:05.659354    1555 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0925 03:34:05.662355    1555 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0925 03:34:05.663775    1555 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0925 03:34:05.663811    1555 kubeadm.go:404] StartCluster: {Name:addons-183000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-183000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 03:34:05.663875    1555 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0925 03:34:05.669363    1555 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0925 03:34:05.672641    1555 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0925 03:34:05.675788    1555 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0925 03:34:05.678955    1555 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0925 03:34:05.678977    1555 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0925 03:34:05.700129    1555 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I0925 03:34:05.700165    1555 kubeadm.go:322] [preflight] Running pre-flight checks
	I0925 03:34:05.762507    1555 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0925 03:34:05.762580    1555 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0925 03:34:05.762631    1555 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0925 03:34:05.856523    1555 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0925 03:34:05.862696    1555 out.go:204]   - Generating certificates and keys ...
	I0925 03:34:05.862744    1555 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0925 03:34:05.862781    1555 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0925 03:34:05.954799    1555 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0925 03:34:06.088347    1555 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0925 03:34:06.179074    1555 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0925 03:34:06.367263    1555 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0925 03:34:06.441263    1555 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0925 03:34:06.441326    1555 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-183000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0925 03:34:06.679555    1555 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0925 03:34:06.679622    1555 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-183000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0925 03:34:06.780717    1555 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0925 03:34:06.934557    1555 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0925 03:34:07.004571    1555 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0925 03:34:07.004599    1555 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0925 03:34:07.096444    1555 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0925 03:34:07.197087    1555 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0925 03:34:07.295019    1555 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0925 03:34:07.459088    1555 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0925 03:34:07.459841    1555 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0925 03:34:07.461016    1555 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0925 03:34:07.464311    1555 out.go:204]   - Booting up control plane ...
	I0925 03:34:07.464429    1555 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0925 03:34:07.464523    1555 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0925 03:34:07.464562    1555 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0925 03:34:07.468573    1555 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0925 03:34:07.468914    1555 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0925 03:34:07.468980    1555 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0925 03:34:07.551081    1555 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0925 03:34:11.552205    1555 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.001307 seconds
	I0925 03:34:11.552277    1555 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0925 03:34:11.558090    1555 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0925 03:34:12.066492    1555 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0925 03:34:12.066604    1555 kubeadm.go:322] [mark-control-plane] Marking the node addons-183000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0925 03:34:12.571455    1555 kubeadm.go:322] [bootstrap-token] Using token: dcud0i.8u8422zl7jahtpxe
	I0925 03:34:12.577836    1555 out.go:204]   - Configuring RBAC rules ...
	I0925 03:34:12.577916    1555 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0925 03:34:12.580042    1555 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0925 03:34:12.583046    1555 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0925 03:34:12.584193    1555 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0925 03:34:12.585457    1555 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0925 03:34:12.586636    1555 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0925 03:34:12.592832    1555 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0925 03:34:12.757427    1555 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0925 03:34:12.982058    1555 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0925 03:34:12.982629    1555 kubeadm.go:322] 
	I0925 03:34:12.982664    1555 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0925 03:34:12.982667    1555 kubeadm.go:322] 
	I0925 03:34:12.982715    1555 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0925 03:34:12.982721    1555 kubeadm.go:322] 
	I0925 03:34:12.982735    1555 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0925 03:34:12.982762    1555 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0925 03:34:12.982824    1555 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0925 03:34:12.982828    1555 kubeadm.go:322] 
	I0925 03:34:12.982852    1555 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0925 03:34:12.982856    1555 kubeadm.go:322] 
	I0925 03:34:12.982895    1555 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0925 03:34:12.982898    1555 kubeadm.go:322] 
	I0925 03:34:12.982927    1555 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0925 03:34:12.982998    1555 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0925 03:34:12.983041    1555 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0925 03:34:12.983046    1555 kubeadm.go:322] 
	I0925 03:34:12.983087    1555 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0925 03:34:12.983123    1555 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0925 03:34:12.983125    1555 kubeadm.go:322] 
	I0925 03:34:12.983172    1555 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token dcud0i.8u8422zl7jahtpxe \
	I0925 03:34:12.983225    1555 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:3fc5fb926713648f8638ba10da0d4f45584d32929bcc07af5ada491c000ad47e \
	I0925 03:34:12.983240    1555 kubeadm.go:322] 	--control-plane 
	I0925 03:34:12.983242    1555 kubeadm.go:322] 
	I0925 03:34:12.983281    1555 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0925 03:34:12.983285    1555 kubeadm.go:322] 
	I0925 03:34:12.983328    1555 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token dcud0i.8u8422zl7jahtpxe \
	I0925 03:34:12.983387    1555 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:3fc5fb926713648f8638ba10da0d4f45584d32929bcc07af5ada491c000ad47e 
	I0925 03:34:12.983463    1555 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0925 03:34:12.983472    1555 cni.go:84] Creating CNI manager for ""
	I0925 03:34:12.983479    1555 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 03:34:12.992098    1555 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0925 03:34:12.995235    1555 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0925 03:34:12.999700    1555 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0925 03:34:13.004656    1555 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0925 03:34:13.004755    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=1bf6c3d5317028f348e55ea19d261973a6487d3c minikube.k8s.io/name=addons-183000 minikube.k8s.io/updated_at=2023_09_25T03_34_13_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:13.004757    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:13.008164    1555 ops.go:34] apiserver oom_adj: -16
	I0925 03:34:13.063625    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:13.095139    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:13.629666    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:14.129649    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:14.629662    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:15.129655    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:15.629628    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:16.129723    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:16.629660    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:17.129683    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:17.629643    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:18.129619    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:18.629638    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:19.129594    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:19.629589    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:20.129625    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:20.629540    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:21.129598    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:21.629573    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:22.129550    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:22.629493    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:23.129517    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:23.629511    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:24.129464    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:24.629448    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:25.129565    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:25.629529    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:26.129496    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:26.629436    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:26.667079    1555 kubeadm.go:1081] duration metric: took 13.662618083s to wait for elevateKubeSystemPrivileges.
	I0925 03:34:26.667097    1555 kubeadm.go:406] StartCluster complete in 21.003673917s
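
The burst of identical commands above is minikube polling at a ~500 ms cadence until the "default" service account exists, before elevating kube-system privileges (the 13.66 s duration metric that follows sums the loop). A minimal Go sketch of that retry pattern, built on k8s.io/apimachinery/pkg/util/wait; the interval, timeout, and exec wrapper are illustrative assumptions, not minikube's actual ssh_runner code:

    package main

    import (
        "fmt"
        "os/exec"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    // waitForDefaultSA retries `kubectl get sa default` every 500ms until it
    // succeeds or the timeout expires, mirroring the loop in the log above.
    func waitForDefaultSA() error {
        kubectl := "/var/lib/minikube/binaries/v1.28.2/kubectl"
        return wait.PollImmediate(500*time.Millisecond, time.Minute, func() (bool, error) {
            cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
                "--kubeconfig=/var/lib/minikube/kubeconfig")
            if err := cmd.Run(); err != nil {
                return false, nil // service account not visible yet; poll again
            }
            return true, nil
        })
    }

    func main() {
        if err := waitForDefaultSA(); err != nil {
            fmt.Println("default service account never appeared:", err)
        }
    }
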
	I0925 03:34:26.667106    1555 settings.go:142] acquiring lock: {Name:mkb5a0822179f07ef9369c44aa9b64eb9ef74eed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 03:34:26.667266    1555 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17297-1010/kubeconfig
	I0925 03:34:26.667431    1555 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/kubeconfig: {Name:mkaa9d09ca2bf27c1a43efc9acf938adcc68343d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 03:34:26.667677    1555 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0925 03:34:26.667722    1555 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0925 03:34:26.667779    1555 addons.go:69] Setting volumesnapshots=true in profile "addons-183000"
	I0925 03:34:26.667782    1555 addons.go:69] Setting cloud-spanner=true in profile "addons-183000"
	I0925 03:34:26.667785    1555 addons.go:231] Setting addon volumesnapshots=true in "addons-183000"
	I0925 03:34:26.667789    1555 addons.go:231] Setting addon cloud-spanner=true in "addons-183000"
	I0925 03:34:26.667790    1555 addons.go:69] Setting default-storageclass=true in profile "addons-183000"
	I0925 03:34:26.667799    1555 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-183000"
	I0925 03:34:26.667820    1555 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-183000"
	I0925 03:34:26.667848    1555 host.go:66] Checking if "addons-183000" exists ...
	I0925 03:34:26.667850    1555 addons.go:69] Setting registry=true in profile "addons-183000"
	I0925 03:34:26.667858    1555 addons.go:231] Setting addon registry=true in "addons-183000"
	I0925 03:34:26.667849    1555 addons.go:69] Setting metrics-server=true in profile "addons-183000"
	I0925 03:34:26.667873    1555 addons.go:231] Setting addon metrics-server=true in "addons-183000"
	I0925 03:34:26.667880    1555 host.go:66] Checking if "addons-183000" exists ...
	I0925 03:34:26.667881    1555 addons.go:69] Setting gcp-auth=true in profile "addons-183000"
	I0925 03:34:26.667902    1555 mustload.go:65] Loading cluster: addons-183000
	I0925 03:34:26.667915    1555 host.go:66] Checking if "addons-183000" exists ...
	I0925 03:34:26.667948    1555 host.go:66] Checking if "addons-183000" exists ...
	I0925 03:34:26.667977    1555 config.go:182] Loaded profile config "addons-183000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 03:34:26.668033    1555 addons.go:69] Setting ingress-dns=true in profile "addons-183000"
	I0925 03:34:26.668035    1555 addons.go:69] Setting inspektor-gadget=true in profile "addons-183000"
	I0925 03:34:26.668042    1555 addons.go:69] Setting storage-provisioner=true in profile "addons-183000"
	I0925 03:34:26.668047    1555 addons.go:231] Setting addon storage-provisioner=true in "addons-183000"
	I0925 03:34:26.668049    1555 addons.go:231] Setting addon inspektor-gadget=true in "addons-183000"
	I0925 03:34:26.668059    1555 host.go:66] Checking if "addons-183000" exists ...
	I0925 03:34:26.668076    1555 host.go:66] Checking if "addons-183000" exists ...
	W0925 03:34:26.668189    1555 host.go:54] host status for "addons-183000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/monitor: connect: connection refused
	W0925 03:34:26.668197    1555 addons.go:277] "addons-183000" is not running, setting volumesnapshots=true and skipping enablement (err=<nil>)
	I0925 03:34:26.667780    1555 addons.go:69] Setting ingress=true in profile "addons-183000"
	I0925 03:34:26.668202    1555 addons.go:231] Setting addon ingress=true in "addons-183000"
	I0925 03:34:26.668215    1555 host.go:66] Checking if "addons-183000" exists ...
	W0925 03:34:26.668271    1555 host.go:54] host status for "addons-183000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/monitor: connect: connection refused
	W0925 03:34:26.668277    1555 addons.go:277] "addons-183000" is not running, setting storage-provisioner=true and skipping enablement (err=<nil>)
	I0925 03:34:26.668038    1555 addons.go:231] Setting addon ingress-dns=true in "addons-183000"
	I0925 03:34:26.668289    1555 host.go:66] Checking if "addons-183000" exists ...
	I0925 03:34:26.667873    1555 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-183000"
	I0925 03:34:26.668351    1555 host.go:66] Checking if "addons-183000" exists ...
	W0925 03:34:26.668420    1555 host.go:54] host status for "addons-183000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/monitor: connect: connection refused
	W0925 03:34:26.668426    1555 addons.go:277] "addons-183000" is not running, setting ingress=true and skipping enablement (err=<nil>)
	I0925 03:34:26.668428    1555 addons.go:467] Verifying addon ingress=true in "addons-183000"
	I0925 03:34:26.671815    1555 out.go:177] * Verifying ingress addon...
	I0925 03:34:26.668077    1555 config.go:182] Loaded profile config "addons-183000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	W0925 03:34:26.668443    1555 host.go:54] host status for "addons-183000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/monitor: connect: connection refused
	W0925 03:34:26.668492    1555 host.go:54] host status for "addons-183000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/monitor: connect: connection refused
	W0925 03:34:26.668560    1555 host.go:54] host status for "addons-183000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/monitor: connect: connection refused
	W0925 03:34:26.668562    1555 host.go:54] host status for "addons-183000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/monitor: connect: connection refused
	W0925 03:34:26.668565    1555 host.go:54] host status for "addons-183000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/monitor: connect: connection refused
	I0925 03:34:26.674660    1555 addons.go:231] Setting addon default-storageclass=true in "addons-183000"
	W0925 03:34:26.679882    1555 addons.go:277] "addons-183000" is not running, setting ingress-dns=true and skipping enablement (err=<nil>)
	W0925 03:34:26.679903    1555 addons.go:277] "addons-183000" is not running, setting csi-hostpath-driver=true and skipping enablement (err=<nil>)
	W0925 03:34:26.679909    1555 addons.go:277] "addons-183000" is not running, setting cloud-spanner=true and skipping enablement (err=<nil>)
	W0925 03:34:26.679910    1555 addons.go:277] "addons-183000" is not running, setting metrics-server=true and skipping enablement (err=<nil>)
	W0925 03:34:26.679914    1555 addons.go:277] "addons-183000" is not running, setting registry=true and skipping enablement (err=<nil>)
	I0925 03:34:26.680408    1555 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0925 03:34:26.680587    1555 host.go:66] Checking if "addons-183000" exists ...
	I0925 03:34:26.685884    1555 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-183000"
	I0925 03:34:26.691851    1555 out.go:177] * Verifying csi-hostpath-driver addon...
	I0925 03:34:26.685950    1555 addons.go:467] Verifying addon metrics-server=true in "addons-183000"
	I0925 03:34:26.685956    1555 addons.go:467] Verifying addon registry=true in "addons-183000"
	I0925 03:34:26.685976    1555 host.go:66] Checking if "addons-183000" exists ...
	I0925 03:34:26.685980    1555 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.20.0
	I0925 03:34:26.693878    1555 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0925 03:34:26.696802    1555 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-183000" context rescaled to 1 replicas
	I0925 03:34:26.698859    1555 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 03:34:26.700109    1555 out.go:177] * Verifying Kubernetes components...
	I0925 03:34:26.699453    1555 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0925 03:34:26.699742    1555 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0925 03:34:26.709918    1555 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0925 03:34:26.713912    1555 out.go:177] * Verifying registry addon...
	I0925 03:34:26.717867    1555 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0925 03:34:26.720802    1555 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/id_rsa Username:docker}
	I0925 03:34:26.717891    1555 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0925 03:34:26.720819    1555 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0925 03:34:26.720825    1555 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/id_rsa Username:docker}
	I0925 03:34:26.721266    1555 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0925 03:34:26.726699    1555 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=registry
	I0925 03:34:26.728776    1555 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0925 03:34:26.751434    1555 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0925 03:34:26.751798    1555 node_ready.go:35] waiting up to 6m0s for node "addons-183000" to be "Ready" ...
	I0925 03:34:26.753298    1555 node_ready.go:49] node "addons-183000" has status "Ready":"True"
	I0925 03:34:26.753320    1555 node_ready.go:38] duration metric: took 1.500542ms waiting for node "addons-183000" to be "Ready" ...
	I0925 03:34:26.753326    1555 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0925 03:34:26.756603    1555 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-nj9v5" in "kube-system" namespace to be "Ready" ...
	I0925 03:34:26.894346    1555 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0925 03:34:26.894357    1555 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0925 03:34:26.894362    1555 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0925 03:34:26.913613    1555 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I0925 03:34:26.913623    1555 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0925 03:34:26.955544    1555 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0925 03:34:26.955558    1555 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0925 03:34:26.966254    1555 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0925 03:34:26.966263    1555 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0925 03:34:26.970978    1555 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0925 03:34:26.970984    1555 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0925 03:34:26.980045    1555 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I0925 03:34:26.980056    1555 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0925 03:34:27.011877    1555 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0925 03:34:27.011886    1555 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I0925 03:34:27.035496    1555 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0925 03:34:27.284243    1555 start.go:923] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
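
The bash pipeline at 03:34:26.751434 above performs that injection: sed splices a hosts stanza into the coredns ConfigMap ahead of the forward directive (and a log directive before errors), then kubectl replace applies the result. Reconstructed from those sed expressions, the relevant Corefile fragment becomes:

            hosts {
               192.168.105.1 host.minikube.internal
               fallthrough
            }
            forward . /etc/resolv.conf
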
	I0925 03:34:28.770683    1555 pod_ready.go:102] pod "coredns-5dd5756b68-nj9v5" in "kube-system" namespace has status "Ready":"False"
	I0925 03:34:30.771066    1555 pod_ready.go:102] pod "coredns-5dd5756b68-nj9v5" in "kube-system" namespace has status "Ready":"False"
	I0925 03:34:33.271406    1555 pod_ready.go:102] pod "coredns-5dd5756b68-nj9v5" in "kube-system" namespace has status "Ready":"False"
	I0925 03:34:33.290034    1555 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0925 03:34:33.290047    1555 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/id_rsa Username:docker}
	I0925 03:34:33.333376    1555 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0925 03:34:33.340520    1555 addons.go:231] Setting addon gcp-auth=true in "addons-183000"
	I0925 03:34:33.340540    1555 host.go:66] Checking if "addons-183000" exists ...
	I0925 03:34:33.341291    1555 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0925 03:34:33.341299    1555 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/id_rsa Username:docker}
	I0925 03:34:33.385047    1555 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0925 03:34:33.390017    1555 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0925 03:34:33.393078    1555 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0925 03:34:33.393083    1555 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0925 03:34:33.401443    1555 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0925 03:34:33.401449    1555 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0925 03:34:33.408814    1555 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0925 03:34:33.408821    1555 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0925 03:34:33.415868    1555 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0925 03:34:33.956480    1555 addons.go:467] Verifying addon gcp-auth=true in "addons-183000"
	I0925 03:34:33.962940    1555 out.go:177] * Verifying gcp-auth addon...
	I0925 03:34:33.970267    1555 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0925 03:34:33.972814    1555 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0925 03:34:33.972821    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:33.975859    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:34.479146    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:34.978976    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:35.477962    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:35.770777    1555 pod_ready.go:102] pod "coredns-5dd5756b68-nj9v5" in "kube-system" namespace has status "Ready":"False"
	I0925 03:34:35.978841    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:36.478564    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:36.978738    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:37.478896    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:37.978838    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:38.273778    1555 pod_ready.go:102] pod "coredns-5dd5756b68-nj9v5" in "kube-system" namespace has status "Ready":"False"
	I0925 03:34:38.478811    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:38.770881    1555 pod_ready.go:92] pod "coredns-5dd5756b68-nj9v5" in "kube-system" namespace has status "Ready":"True"
	I0925 03:34:38.770889    1555 pod_ready.go:81] duration metric: took 12.014493833s waiting for pod "coredns-5dd5756b68-nj9v5" in "kube-system" namespace to be "Ready" ...
	I0925 03:34:38.770893    1555 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-183000" in "kube-system" namespace to be "Ready" ...
	I0925 03:34:38.773593    1555 pod_ready.go:92] pod "etcd-addons-183000" in "kube-system" namespace has status "Ready":"True"
	I0925 03:34:38.773599    1555 pod_ready.go:81] duration metric: took 2.702459ms waiting for pod "etcd-addons-183000" in "kube-system" namespace to be "Ready" ...
	I0925 03:34:38.773602    1555 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-183000" in "kube-system" namespace to be "Ready" ...
	I0925 03:34:38.775799    1555 pod_ready.go:92] pod "kube-apiserver-addons-183000" in "kube-system" namespace has status "Ready":"True"
	I0925 03:34:38.775804    1555 pod_ready.go:81] duration metric: took 2.198875ms waiting for pod "kube-apiserver-addons-183000" in "kube-system" namespace to be "Ready" ...
	I0925 03:34:38.775808    1555 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-183000" in "kube-system" namespace to be "Ready" ...
	I0925 03:34:38.777922    1555 pod_ready.go:92] pod "kube-controller-manager-addons-183000" in "kube-system" namespace has status "Ready":"True"
	I0925 03:34:38.777929    1555 pod_ready.go:81] duration metric: took 2.118625ms waiting for pod "kube-controller-manager-addons-183000" in "kube-system" namespace to be "Ready" ...
	I0925 03:34:38.777933    1555 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7t7bh" in "kube-system" namespace to be "Ready" ...
	I0925 03:34:38.780129    1555 pod_ready.go:92] pod "kube-proxy-7t7bh" in "kube-system" namespace has status "Ready":"True"
	I0925 03:34:38.780136    1555 pod_ready.go:81] duration metric: took 2.199875ms waiting for pod "kube-proxy-7t7bh" in "kube-system" namespace to be "Ready" ...
	I0925 03:34:38.780139    1555 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-183000" in "kube-system" namespace to be "Ready" ...
	I0925 03:34:38.977389    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:39.173086    1555 pod_ready.go:92] pod "kube-scheduler-addons-183000" in "kube-system" namespace has status "Ready":"True"
	I0925 03:34:39.173096    1555 pod_ready.go:81] duration metric: took 392.960166ms waiting for pod "kube-scheduler-addons-183000" in "kube-system" namespace to be "Ready" ...
	I0925 03:34:39.173100    1555 pod_ready.go:38] duration metric: took 12.419997458s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0925 03:34:39.173111    1555 api_server.go:52] waiting for apiserver process to appear ...
	I0925 03:34:39.173181    1555 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 03:34:39.178068    1555 api_server.go:72] duration metric: took 12.479424625s to wait for apiserver process to appear ...
	I0925 03:34:39.178075    1555 api_server.go:88] waiting for apiserver healthz status ...
	I0925 03:34:39.178081    1555 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I0925 03:34:39.182471    1555 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
	I0925 03:34:39.183204    1555 api_server.go:141] control plane version: v1.28.2
	I0925 03:34:39.183210    1555 api_server.go:131] duration metric: took 5.132042ms to wait for apiserver health ...
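
The three waits above follow minikube's apiserver readiness sequence: confirm the kube-apiserver process with pgrep, probe /healthz over HTTPS, then read the control-plane version. A minimal Go sketch of such a healthz probe; skipping TLS verification is a shortcut assumed for this sketch only, whereas minikube trusts the cluster CA:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // GET https://<apiserver>/healthz and print the status, as in the log.
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
            },
        }
        resp, err := client.Get("https://192.168.105.2:8443/healthz")
        if err != nil {
            fmt.Println("healthz unreachable:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%d: %s\n", resp.StatusCode, body) // expect 200: ok
    }
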
	I0925 03:34:39.183213    1555 system_pods.go:43] waiting for kube-system pods to appear ...
	I0925 03:34:39.372354    1555 system_pods.go:59] 6 kube-system pods found
	I0925 03:34:39.372365    1555 system_pods.go:61] "coredns-5dd5756b68-nj9v5" [b1bb0e62-0339-479f-9572-1e07ab015a1d] Running
	I0925 03:34:39.372368    1555 system_pods.go:61] "etcd-addons-183000" [98901ac9-8165-4fad-b6a6-6c757da8e783] Running
	I0925 03:34:39.372371    1555 system_pods.go:61] "kube-apiserver-addons-183000" [b3899bc1-2055-47fb-aded-8cc3e5ca8b22] Running
	I0925 03:34:39.372373    1555 system_pods.go:61] "kube-controller-manager-addons-183000" [12803b97-0e90-4869-a114-2dce351af701] Running
	I0925 03:34:39.372376    1555 system_pods.go:61] "kube-proxy-7t7bh" [b51c70db-a512-4aae-af91-8b45e6ce9f89] Running
	I0925 03:34:39.372378    1555 system_pods.go:61] "kube-scheduler-addons-183000" [543428f6-b6ce-448c-9d3e-48c775396c75] Running
	I0925 03:34:39.372382    1555 system_pods.go:74] duration metric: took 189.166917ms to wait for pod list to return data ...
	I0925 03:34:39.372386    1555 default_sa.go:34] waiting for default service account to be created ...
	I0925 03:34:39.478483    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:39.569942    1555 default_sa.go:45] found service account: "default"
	I0925 03:34:39.569952    1555 default_sa.go:55] duration metric: took 197.566292ms for default service account to be created ...
	I0925 03:34:39.569955    1555 system_pods.go:116] waiting for k8s-apps to be running ...
	I0925 03:34:39.771555    1555 system_pods.go:86] 6 kube-system pods found
	I0925 03:34:39.771566    1555 system_pods.go:89] "coredns-5dd5756b68-nj9v5" [b1bb0e62-0339-479f-9572-1e07ab015a1d] Running
	I0925 03:34:39.771569    1555 system_pods.go:89] "etcd-addons-183000" [98901ac9-8165-4fad-b6a6-6c757da8e783] Running
	I0925 03:34:39.771571    1555 system_pods.go:89] "kube-apiserver-addons-183000" [b3899bc1-2055-47fb-aded-8cc3e5ca8b22] Running
	I0925 03:34:39.771573    1555 system_pods.go:89] "kube-controller-manager-addons-183000" [12803b97-0e90-4869-a114-2dce351af701] Running
	I0925 03:34:39.771576    1555 system_pods.go:89] "kube-proxy-7t7bh" [b51c70db-a512-4aae-af91-8b45e6ce9f89] Running
	I0925 03:34:39.771579    1555 system_pods.go:89] "kube-scheduler-addons-183000" [543428f6-b6ce-448c-9d3e-48c775396c75] Running
	I0925 03:34:39.771582    1555 system_pods.go:126] duration metric: took 201.627792ms to wait for k8s-apps to be running ...
	I0925 03:34:39.771585    1555 system_svc.go:44] waiting for kubelet service to be running ....
	I0925 03:34:39.771649    1555 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0925 03:34:39.777059    1555 system_svc.go:56] duration metric: took 5.471834ms WaitForService to wait for kubelet.
	I0925 03:34:39.777072    1555 kubeadm.go:581] duration metric: took 13.078440792s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0925 03:34:39.777081    1555 node_conditions.go:102] verifying NodePressure condition ...
	I0925 03:34:39.970496    1555 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0925 03:34:39.970507    1555 node_conditions.go:123] node cpu capacity is 2
	I0925 03:34:39.970512    1555 node_conditions.go:105] duration metric: took 193.43225ms to run NodePressure ...
	I0925 03:34:39.970518    1555 start.go:228] waiting for startup goroutines ...
	I0925 03:34:39.977869    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:40.478718    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:40.978494    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:41.478330    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:41.978723    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:42.478484    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:42.978499    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:43.478310    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:43.978560    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:44.478626    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:44.978747    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:45.478652    1555 kapi.go:107] duration metric: took 11.508592542s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0925 03:34:45.482917    1555 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-183000 cluster.
	I0925 03:34:45.486908    1555 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0925 03:34:45.489839    1555 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
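
The kapi.go waits in this section poll pods by label selector until they report ready: gcp-auth succeeds after about 11.5 s, while the ingress, csi-hostpath-driver, and registry selectors (opened at 03:34:26) keep polling. A minimal client-go sketch of that kind of wait; the helper, poll interval, and phase check are illustrative assumptions, not minikube's kapi implementation, though the kubeconfig path and selector come from the log:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForLabeledPod polls until a pod matching selector in ns is Running.
    func waitForLabeledPod(cs *kubernetes.Clientset, ns, selector string) error {
        return wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
            pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
                metav1.ListOptions{LabelSelector: selector})
            if err != nil || len(pods.Items) == 0 {
                return false, nil // nothing matching yet; keep polling
            }
            return pods.Items[0].Status.Phase == corev1.PodRunning, nil
        })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        if err := waitForLabeledPod(cs, "gcp-auth", "kubernetes.io/minikube-addons=gcp-auth"); err != nil {
            fmt.Println("timed out:", err)
        }
    }
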
	I0925 03:40:26.681420    1555 kapi.go:107] duration metric: took 6m0.007630792s to wait for app.kubernetes.io/name=ingress-nginx ...
	W0925 03:40:26.681519    1555 out.go:239] ! Enabling 'ingress' returned an error: running callbacks: [waiting for app.kubernetes.io/name=ingress-nginx pods: context deadline exceeded]
	I0925 03:40:26.713271    1555 kapi.go:107] duration metric: took 6m0.020443166s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	W0925 03:40:26.713301    1555 out.go:239] ! Enabling 'csi-hostpath-driver' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=csi-hostpath-driver pods: context deadline exceeded]
	I0925 03:40:26.715027    1555 kapi.go:107] duration metric: took 6m0.000386167s to wait for kubernetes.io/minikube-addons=registry ...
	W0925 03:40:26.715058    1555 out.go:239] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
	I0925 03:40:26.720408    1555 out.go:177] * Enabled addons: volumesnapshots, storage-provisioner, cloud-spanner, ingress-dns, metrics-server, default-storageclass, inspektor-gadget, gcp-auth
	I0925 03:40:26.729284    1555 addons.go:502] enable addons completed in 6m0.068199458s: enabled=[volumesnapshots storage-provisioner cloud-spanner ingress-dns metrics-server default-storageclass inspektor-gadget gcp-auth]
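
Tying the two halves of this run together: the monitor-socket connection refusals at 03:34:26 led minikube to mark ingress, csi-hostpath-driver, and registry as enabled while skipping their actual deployment, yet verification waits for their pods were still registered. With no matching pods ever created (each selector found 0 pods), those waits could only run out, which is what the three 6m0s context-deadline warnings above record; this is a reading of the log rather than an authoritative diagnosis.
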
	I0925 03:40:26.729295    1555 start.go:233] waiting for cluster config update ...
	I0925 03:40:26.729300    1555 start.go:242] writing updated cluster config ...
	I0925 03:40:26.729761    1555 ssh_runner.go:195] Run: rm -f paused
	I0925 03:40:26.760421    1555 start.go:600] kubectl: 1.27.2, cluster: 1.28.2 (minor skew: 1)
	I0925 03:40:26.764251    1555 out.go:177] * Done! kubectl is now configured to use "addons-183000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Mon 2023-09-25 10:33:55 UTC, ends at Mon 2023-09-25 11:04:51 UTC. --
	Sep 25 10:34:42 addons-183000 dockerd[1111]: time="2023-09-25T10:34:42.404325086Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 25 10:34:44 addons-183000 cri-dockerd[998]: time="2023-09-25T10:34:44Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf: Status: Downloaded newer image for gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf"
	Sep 25 10:34:44 addons-183000 dockerd[1111]: time="2023-09-25T10:34:44.321936694Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 25 10:34:44 addons-183000 dockerd[1111]: time="2023-09-25T10:34:44.321971829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 10:34:44 addons-183000 dockerd[1111]: time="2023-09-25T10:34:44.321982528Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 25 10:34:44 addons-183000 dockerd[1111]: time="2023-09-25T10:34:44.321989314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 10:52:28 addons-183000 dockerd[1111]: time="2023-09-25T10:52:28.470595101Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 25 10:52:28 addons-183000 dockerd[1111]: time="2023-09-25T10:52:28.470647350Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 10:52:28 addons-183000 dockerd[1111]: time="2023-09-25T10:52:28.470663058Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 25 10:52:28 addons-183000 dockerd[1111]: time="2023-09-25T10:52:28.470673850Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 10:52:28 addons-183000 cri-dockerd[998]: time="2023-09-25T10:52:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/df99a16ef61333f49304447de1f31c9677e9243b43dae14dfba57e8a2aeeb1be/resolv.conf as [nameserver 10.96.0.10 search headlamp.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 25 10:52:28 addons-183000 dockerd[1105]: time="2023-09-25T10:52:28.813334559Z" level=warning msg="reference for unknown type: " digest="sha256:bb15916c96306cd14f1c9c09c639d01d1d1fb854fd770bf99f3e7a9deb584753" remote="ghcr.io/headlamp-k8s/headlamp@sha256:bb15916c96306cd14f1c9c09c639d01d1d1fb854fd770bf99f3e7a9deb584753"
	Sep 25 10:52:33 addons-183000 cri-dockerd[998]: time="2023-09-25T10:52:33Z" level=info msg="Stop pulling image ghcr.io/headlamp-k8s/headlamp:v0.19.1@sha256:bb15916c96306cd14f1c9c09c639d01d1d1fb854fd770bf99f3e7a9deb584753: Status: Downloaded newer image for ghcr.io/headlamp-k8s/headlamp@sha256:bb15916c96306cd14f1c9c09c639d01d1d1fb854fd770bf99f3e7a9deb584753"
	Sep 25 10:52:34 addons-183000 dockerd[1111]: time="2023-09-25T10:52:34.018791826Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 25 10:52:34 addons-183000 dockerd[1111]: time="2023-09-25T10:52:34.018844700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 10:52:34 addons-183000 dockerd[1111]: time="2023-09-25T10:52:34.018856825Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 25 10:52:34 addons-183000 dockerd[1111]: time="2023-09-25T10:52:34.018863408Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 11:04:46 addons-183000 dockerd[1111]: time="2023-09-25T11:04:46.234281937Z" level=info msg="shim disconnected" id=3214d7d3645b319f499cd4e473783f22248ab293d0b7bd09221747894dd5ebed namespace=moby
	Sep 25 11:04:46 addons-183000 dockerd[1111]: time="2023-09-25T11:04:46.234315062Z" level=warning msg="cleaning up after shim disconnected" id=3214d7d3645b319f499cd4e473783f22248ab293d0b7bd09221747894dd5ebed namespace=moby
	Sep 25 11:04:46 addons-183000 dockerd[1111]: time="2023-09-25T11:04:46.234319604Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 25 11:04:46 addons-183000 dockerd[1105]: time="2023-09-25T11:04:46.234527935Z" level=info msg="ignoring event" container=3214d7d3645b319f499cd4e473783f22248ab293d0b7bd09221747894dd5ebed module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 25 11:04:46 addons-183000 dockerd[1105]: time="2023-09-25T11:04:46.264129930Z" level=info msg="ignoring event" container=1f38ec635c03d87bfa52e9a8918af2011a604df8d3e7dc5113f3e662ce6bb608 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 25 11:04:46 addons-183000 dockerd[1111]: time="2023-09-25T11:04:46.264758631Z" level=info msg="shim disconnected" id=1f38ec635c03d87bfa52e9a8918af2011a604df8d3e7dc5113f3e662ce6bb608 namespace=moby
	Sep 25 11:04:46 addons-183000 dockerd[1111]: time="2023-09-25T11:04:46.264787964Z" level=warning msg="cleaning up after shim disconnected" id=1f38ec635c03d87bfa52e9a8918af2011a604df8d3e7dc5113f3e662ce6bb608 namespace=moby
	Sep 25 11:04:46 addons-183000 dockerd[1111]: time="2023-09-25T11:04:46.264792755Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                          CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d5793dcd01c69       ghcr.io/headlamp-k8s/headlamp@sha256:bb15916c96306cd14f1c9c09c639d01d1d1fb854fd770bf99f3e7a9deb584753          12 minutes ago      Running             headlamp                  0                   df99a16ef6133       headlamp-58b88cff49-kdgv2
	f0ceeef2fd99f       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf   30 minutes ago      Running             gcp-auth                  0                   217fc96b3ae84       gcp-auth-d4c87556c-fgkgk
	09ae8580d310e       97e04611ad434                                                                                                  30 minutes ago      Running             coredns                   0                   9802832060d13       coredns-5dd5756b68-nj9v5
	fff72387d957b       7da62c127fc0f                                                                                                  30 minutes ago      Running             kube-proxy                0                   2514b88f9fbec       kube-proxy-7t7bh
	e24563a552742       89d57b83c1786                                                                                                  30 minutes ago      Running             kube-controller-manager   0                   7170972f2383c       kube-controller-manager-addons-183000
	e38f0c6d58f79       30bb499447fe1                                                                                                  30 minutes ago      Running             kube-apiserver            0                   e3ec8dad501d8       kube-apiserver-addons-183000
	202a7fdac8250       9cdd6470f48c8                                                                                                  30 minutes ago      Running             etcd                      0                   f07db97eda3c5       etcd-addons-183000
	5a87dfcd0e1a4       64fc40cee3716                                                                                                  30 minutes ago      Running             kube-scheduler            0                   88f62df9ef878       kube-scheduler-addons-183000
	
	* 
	* ==> coredns [09ae8580d310] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:53855 - 12762 "HINFO IN 6175233926506353361.1980247959579836404. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004134462s
	[INFO] 10.244.0.5:53045 - 37584 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000106198s
	[INFO] 10.244.0.5:58309 - 60928 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000170558s
	[INFO] 10.244.0.5:51843 - 23622 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000213104s
	[INFO] 10.244.0.5:42760 - 58990 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000042504s
	[INFO] 10.244.0.5:51340 - 46119 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00004929s
	[INFO] 10.244.0.5:39848 - 8379 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000023105s
	[INFO] 10.244.0.5:32887 - 31577 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001136668s
	[INFO] 10.244.0.5:49269 - 43084 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.001085546s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-183000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-183000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1bf6c3d5317028f348e55ea19d261973a6487d3c
	                    minikube.k8s.io/name=addons-183000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_25T03_34_13_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 25 Sep 2023 10:34:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-183000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 25 Sep 2023 11:04:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 25 Sep 2023 11:02:55 +0000   Mon, 25 Sep 2023 10:34:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 25 Sep 2023 11:02:55 +0000   Mon, 25 Sep 2023 10:34:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 25 Sep 2023 11:02:55 +0000   Mon, 25 Sep 2023 10:34:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 25 Sep 2023 11:02:55 +0000   Mon, 25 Sep 2023 10:34:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-183000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 3ec93b0c295a46b69f667e92919bae36
	  System UUID:                3ec93b0c295a46b69f667e92919bae36
	  Boot ID:                    e140f335-14d6-4d36-af6f-4c16a72ee860
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  gcp-auth                    gcp-auth-d4c87556c-fgkgk                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         30m
	  headlamp                    headlamp-58b88cff49-kdgv2                0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-5dd5756b68-nj9v5                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     30m
	  kube-system                 etcd-addons-183000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         30m
	  kube-system                 kube-apiserver-addons-183000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         30m
	  kube-system                 kube-controller-manager-addons-183000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         30m
	  kube-system                 kube-proxy-7t7bh                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         30m
	  kube-system                 kube-scheduler-addons-183000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         30m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 30m   kube-proxy       
	  Normal  Starting                 30m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  30m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  30m   kubelet          Node addons-183000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30m   kubelet          Node addons-183000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30m   kubelet          Node addons-183000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                30m   kubelet          Node addons-183000 status is now: NodeReady
	  Normal  RegisteredNode           30m   node-controller  Node addons-183000 event: Registered Node addons-183000 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.641440] EINJ: EINJ table not found.
	[  +0.489201] systemd-fstab-generator[117]: Ignoring "noauto" for root device
	[  +0.043090] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000792] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +3.110509] systemd-fstab-generator[482]: Ignoring "noauto" for root device
	[  +0.074666] systemd-fstab-generator[494]: Ignoring "noauto" for root device
	[  +0.418795] systemd-fstab-generator[667]: Ignoring "noauto" for root device
	[  +0.183648] systemd-fstab-generator[704]: Ignoring "noauto" for root device
	[  +0.073331] systemd-fstab-generator[715]: Ignoring "noauto" for root device
	[  +0.088908] systemd-fstab-generator[728]: Ignoring "noauto" for root device
	[  +1.149460] kauditd_printk_skb: 17 callbacks suppressed
	[  +0.104006] systemd-fstab-generator[917]: Ignoring "noauto" for root device
	[  +0.078468] systemd-fstab-generator[928]: Ignoring "noauto" for root device
	[  +0.058376] systemd-fstab-generator[939]: Ignoring "noauto" for root device
	[  +0.070842] systemd-fstab-generator[950]: Ignoring "noauto" for root device
	[  +0.085054] systemd-fstab-generator[991]: Ignoring "noauto" for root device
	[Sep25 10:34] systemd-fstab-generator[1098]: Ignoring "noauto" for root device
	[  +2.191489] kauditd_printk_skb: 29 callbacks suppressed
	[  +2.399489] systemd-fstab-generator[1471]: Ignoring "noauto" for root device
	[  +5.122490] systemd-fstab-generator[2347]: Ignoring "noauto" for root device
	[ +14.463207] kauditd_printk_skb: 41 callbacks suppressed
	[  +6.798894] kauditd_printk_skb: 21 callbacks suppressed
	[  +4.810513] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[  +3.498700] kauditd_printk_skb: 12 callbacks suppressed
	[Sep25 10:52] kauditd_printk_skb: 5 callbacks suppressed
	
	* 
	* ==> etcd [202a7fdac825] <==
	* {"level":"info","ts":"2023-09-25T10:34:31.937317Z","caller":"traceutil/trace.go:171","msg":"trace[667548922] transaction","detail":"{read_only:false; response_revision:416; number_of_response:1; }","duration":"126.937574ms","start":"2023-09-25T10:34:31.810371Z","end":"2023-09-25T10:34:31.937309Z","steps":["trace[667548922] 'process raft request'  (duration: 126.824018ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-25T10:34:36.99882Z","caller":"traceutil/trace.go:171","msg":"trace[243510449] linearizableReadLoop","detail":"{readStateIndex:482; appliedIndex:481; }","duration":"165.982552ms","start":"2023-09-25T10:34:36.832829Z","end":"2023-09-25T10:34:36.998811Z","steps":["trace[243510449] 'read index received'  (duration: 165.770762ms)","trace[243510449] 'applied index is now lower than readState.Index'  (duration: 211.209µs)"],"step_count":2}
	{"level":"warn","ts":"2023-09-25T10:34:36.998969Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"166.151453ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2023-09-25T10:34:36.999019Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.796797ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-5dd5756b68-nj9v5\" ","response":"range_response_count:1 size:5002"}
	{"level":"info","ts":"2023-09-25T10:34:36.999045Z","caller":"traceutil/trace.go:171","msg":"trace[2057756314] range","detail":"{range_begin:/registry/pods/kube-system/coredns-5dd5756b68-nj9v5; range_end:; response_count:1; response_revision:469; }","duration":"123.811177ms","start":"2023-09-25T10:34:36.875219Z","end":"2023-09-25T10:34:36.99903Z","steps":["trace[2057756314] 'agreement among raft nodes before linearized reading'  (duration: 123.788776ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-25T10:34:36.999164Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"162.803156ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:6 size:31393"}
	{"level":"info","ts":"2023-09-25T10:34:36.999205Z","caller":"traceutil/trace.go:171","msg":"trace[1483278895] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:6; response_revision:469; }","duration":"162.834825ms","start":"2023-09-25T10:34:36.836356Z","end":"2023-09-25T10:34:36.99919Z","steps":["trace[1483278895] 'agreement among raft nodes before linearized reading'  (duration: 162.701625ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-25T10:34:36.999Z","caller":"traceutil/trace.go:171","msg":"trace[3634572] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:0; response_revision:469; }","duration":"166.183579ms","start":"2023-09-25T10:34:36.832812Z","end":"2023-09-25T10:34:36.998995Z","steps":["trace[3634572] 'agreement among raft nodes before linearized reading'  (duration: 166.053912ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-25T10:34:36.998947Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"163.574471ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:6 size:31393"}
	{"level":"info","ts":"2023-09-25T10:34:36.999285Z","caller":"traceutil/trace.go:171","msg":"trace[819315326] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:6; response_revision:469; }","duration":"163.922954ms","start":"2023-09-25T10:34:36.83536Z","end":"2023-09-25T10:34:36.999283Z","steps":["trace[819315326] 'agreement among raft nodes before linearized reading'  (duration: 163.541307ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-25T10:44:09.779305Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":608}
	{"level":"info","ts":"2023-09-25T10:44:09.779775Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":608,"took":"346.08µs","hash":977468107}
	{"level":"info","ts":"2023-09-25T10:44:09.779794Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":977468107,"revision":608,"compact-revision":-1}
	{"level":"info","ts":"2023-09-25T10:49:09.783821Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":698}
	{"level":"info","ts":"2023-09-25T10:49:09.784257Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":698,"took":"244.664µs","hash":3592134345}
	{"level":"info","ts":"2023-09-25T10:49:09.784273Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3592134345,"revision":698,"compact-revision":608}
	{"level":"info","ts":"2023-09-25T10:54:09.786046Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":788}
	{"level":"info","ts":"2023-09-25T10:54:09.786391Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":788,"took":"183.456µs","hash":2921777324}
	{"level":"info","ts":"2023-09-25T10:54:09.786401Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2921777324,"revision":788,"compact-revision":698}
	{"level":"info","ts":"2023-09-25T10:59:09.78846Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":921}
	{"level":"info","ts":"2023-09-25T10:59:09.788929Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":921,"took":"220.122µs","hash":3368749381}
	{"level":"info","ts":"2023-09-25T10:59:09.788942Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3368749381,"revision":921,"compact-revision":788}
	{"level":"info","ts":"2023-09-25T11:04:09.790636Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1015}
	{"level":"info","ts":"2023-09-25T11:04:09.790979Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1015,"took":"207.581µs","hash":1495489543}
	{"level":"info","ts":"2023-09-25T11:04:09.790991Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1495489543,"revision":1015,"compact-revision":921}
	
	* 
	* ==> gcp-auth [f0ceeef2fd99] <==
	* 2023/09/25 10:34:44 GCP Auth Webhook started!
	2023/09/25 10:52:28 Ready to marshal response ...
	2023/09/25 10:52:28 Ready to write response ...
	2023/09/25 10:52:28 Ready to marshal response ...
	2023/09/25 10:52:28 Ready to write response ...
	2023/09/25 10:52:28 Ready to marshal response ...
	2023/09/25 10:52:28 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  11:04:51 up 30 min,  0 users,  load average: 0.01, 0.08, 0.09
	Linux addons-183000 5.10.57 #1 SMP PREEMPT Mon Sep 18 20:10:16 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [e38f0c6d58f7] <==
	* I0925 10:34:11.481757       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0925 10:34:11.494186       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0925 10:34:11.536039       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0925 10:34:11.538110       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.105.2]
	I0925 10:34:11.538484       1 controller.go:624] quota admission added evaluator for: endpoints
	I0925 10:34:11.539858       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0925 10:34:12.380709       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0925 10:34:12.885080       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0925 10:34:12.893075       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0925 10:34:12.904077       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0925 10:34:26.498134       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0925 10:34:26.509156       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0925 10:34:27.526494       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0925 10:34:34.002889       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.108.108.100"}
	I0925 10:34:34.022823       1 controller.go:624] quota admission added evaluator for: jobs.batch
	I0925 10:39:10.399639       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0925 10:44:10.399758       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0925 10:49:10.400399       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0925 10:52:28.085866       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.104.63.194"}
	I0925 10:54:10.400553       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0925 10:59:10.400827       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0925 11:04:10.400976       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0925 11:04:46.170548       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0925 11:04:46.172109       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0925 11:04:47.178215       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
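	
	Nothing in this apiserver section is an error until the very end: it is quota-admission evaluators being registered and ClusterIPs being allocated (10.96.0.1, 10.96.0.10, 10.108.108.100, 10.104.63.194), and the final watcher termination corresponds to the gadget.kinvolk.io CRD being removed when inspektor-gadget was disabled. The allocations can be cross-checked against the live services with the same kubectl context the harness uses below:
	
	  kubectl --context addons-183000 get svc -A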
	
	* 
	* ==> kube-controller-manager [e24563a55274] <==
	* E0925 11:02:26.524033       1 pv_controller.go:1562] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0925 11:02:26.524124       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0925 11:02:41.524053       1 pv_controller.go:1562] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0925 11:02:41.524091       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0925 11:02:56.524734       1 pv_controller.go:1562] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0925 11:02:56.524819       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0925 11:03:11.525104       1 pv_controller.go:1562] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0925 11:03:11.525169       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0925 11:03:26.525484       1 pv_controller.go:1562] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0925 11:03:26.525561       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0925 11:03:41.526494       1 pv_controller.go:1562] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0925 11:03:41.526512       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0925 11:03:56.527456       1 pv_controller.go:1562] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0925 11:03:56.527614       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0925 11:04:11.528390       1 pv_controller.go:1562] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0925 11:04:11.528457       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0925 11:04:26.528938       1 pv_controller.go:1562] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0925 11:04:26.529035       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0925 11:04:41.530091       1 pv_controller.go:1562] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0925 11:04:41.530135       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0925 11:04:47.179007       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	W0925 11:04:48.316002       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0925 11:04:48.316020       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0925 11:04:51.163542       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0925 11:04:51.163590       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
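	
	The controller-manager is retrying a single failure on a 15-second loop: PVC default/hpvc names StorageClass "csi-hostpath-sc", which evidently was never created because the csi-hostpath-driver addon never finished deploying. A quick manual confirmation, using the same context as the harness:
	
	  kubectl --context addons-183000 get storageclass
	  kubectl --context addons-183000 describe pvc hpvc --namespace default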
	
	* 
	* ==> kube-proxy [fff72387d957] <==
	* I0925 10:34:27.163880       1 server_others.go:69] "Using iptables proxy"
	I0925 10:34:27.181208       1 node.go:141] Successfully retrieved node IP: 192.168.105.2
	I0925 10:34:27.228178       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0925 10:34:27.228201       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0925 10:34:27.231917       1 server_others.go:152] "Using iptables Proxier"
	I0925 10:34:27.231983       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0925 10:34:27.232100       1 server.go:846] "Version info" version="v1.28.2"
	I0925 10:34:27.232211       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0925 10:34:27.232663       1 config.go:188] "Starting service config controller"
	I0925 10:34:27.232700       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0925 10:34:27.232734       1 config.go:97] "Starting endpoint slice config controller"
	I0925 10:34:27.232760       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0925 10:34:27.233047       1 config.go:315] "Starting node config controller"
	I0925 10:34:27.233085       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0925 10:34:27.333424       1 shared_informer.go:318] Caches are synced for node config
	I0925 10:34:27.333462       1 shared_informer.go:318] Caches are synced for service config
	I0925 10:34:27.333490       1 shared_informer.go:318] Caches are synced for endpoint slice config
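	
	kube-proxy itself came up cleanly: iptables mode, single-stack IPv4 (the guest kernel has no IPv6 iptables support, hence the "No iptables support for family" line), with all three config caches synced within a second. Were the service path in doubt, the generated NAT chains could be inspected over the profile's SSH access:
	
	  out/minikube-darwin-arm64 -p addons-183000 ssh -- sudo iptables -t nat -L KUBE-SERVICES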
	
	* 
	* ==> kube-scheduler [5a87dfcd0e1a] <==
	* W0925 10:34:10.412769       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0925 10:34:10.413000       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0925 10:34:10.412552       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0925 10:34:10.413020       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0925 10:34:10.412572       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0925 10:34:10.413082       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0925 10:34:10.412878       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0925 10:34:10.413107       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0925 10:34:11.233945       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0925 10:34:11.233969       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0925 10:34:11.245555       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0925 10:34:11.245565       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0925 10:34:11.257234       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0925 10:34:11.257245       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0925 10:34:11.305366       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0925 10:34:11.305376       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0925 10:34:11.335532       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0925 10:34:11.335546       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0925 10:34:11.379250       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0925 10:34:11.379349       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0925 10:34:11.401540       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0925 10:34:11.401585       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0925 10:34:11.494359       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0925 10:34:11.494379       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0925 10:34:13.407721       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
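	
	The scheduler's "forbidden" list/watch failures are the usual startup race: its informers begin before the apiserver has finished wiring up the bootstrap RBAC for system:kube-scheduler, and they cease once the caches sync at 10:34:13. Had they persisted, the bootstrap binding would be the first thing to inspect:
	
	  kubectl --context addons-183000 get clusterrolebinding system:kube-scheduler -o yaml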
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-09-25 10:33:55 UTC, ends at Mon 2023-09-25 11:04:51 UTC. --
	Sep 25 11:04:46 addons-183000 kubelet[2366]: I0925 11:04:46.395180    2366 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94a4278a-2b52-4344-a640-8a01a54306c2-host" (OuterVolumeSpecName: "host") pod "94a4278a-2b52-4344-a640-8a01a54306c2" (UID: "94a4278a-2b52-4344-a640-8a01a54306c2"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 25 11:04:46 addons-183000 kubelet[2366]: I0925 11:04:46.395186    2366 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/94a4278a-2b52-4344-a640-8a01a54306c2-run\") pod \"94a4278a-2b52-4344-a640-8a01a54306c2\" (UID: \"94a4278a-2b52-4344-a640-8a01a54306c2\") "
	Sep 25 11:04:46 addons-183000 kubelet[2366]: I0925 11:04:46.395196    2366 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cgroup\" (UniqueName: \"kubernetes.io/host-path/94a4278a-2b52-4344-a640-8a01a54306c2-cgroup\") pod \"94a4278a-2b52-4344-a640-8a01a54306c2\" (UID: \"94a4278a-2b52-4344-a640-8a01a54306c2\") "
	Sep 25 11:04:46 addons-183000 kubelet[2366]: I0925 11:04:46.395201    2366 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94a4278a-2b52-4344-a640-8a01a54306c2-run" (OuterVolumeSpecName: "run") pod "94a4278a-2b52-4344-a640-8a01a54306c2" (UID: "94a4278a-2b52-4344-a640-8a01a54306c2"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 25 11:04:46 addons-183000 kubelet[2366]: I0925 11:04:46.395205    2366 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/94a4278a-2b52-4344-a640-8a01a54306c2-bpffs\") pod \"94a4278a-2b52-4344-a640-8a01a54306c2\" (UID: \"94a4278a-2b52-4344-a640-8a01a54306c2\") "
	Sep 25 11:04:46 addons-183000 kubelet[2366]: I0925 11:04:46.395215    2366 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94a4278a-2b52-4344-a640-8a01a54306c2-cgroup" (OuterVolumeSpecName: "cgroup") pod "94a4278a-2b52-4344-a640-8a01a54306c2" (UID: "94a4278a-2b52-4344-a640-8a01a54306c2"). InnerVolumeSpecName "cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 25 11:04:46 addons-183000 kubelet[2366]: I0925 11:04:46.395218    2366 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqwl5\" (UniqueName: \"kubernetes.io/projected/94a4278a-2b52-4344-a640-8a01a54306c2-kube-api-access-bqwl5\") pod \"94a4278a-2b52-4344-a640-8a01a54306c2\" (UID: \"94a4278a-2b52-4344-a640-8a01a54306c2\") "
	Sep 25 11:04:46 addons-183000 kubelet[2366]: I0925 11:04:46.395223    2366 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94a4278a-2b52-4344-a640-8a01a54306c2-bpffs" (OuterVolumeSpecName: "bpffs") pod "94a4278a-2b52-4344-a640-8a01a54306c2" (UID: "94a4278a-2b52-4344-a640-8a01a54306c2"). InnerVolumeSpecName "bpffs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 25 11:04:46 addons-183000 kubelet[2366]: I0925 11:04:46.395237    2366 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"modules\" (UniqueName: \"kubernetes.io/host-path/94a4278a-2b52-4344-a640-8a01a54306c2-modules\") pod \"94a4278a-2b52-4344-a640-8a01a54306c2\" (UID: \"94a4278a-2b52-4344-a640-8a01a54306c2\") "
	Sep 25 11:04:46 addons-183000 kubelet[2366]: I0925 11:04:46.395246    2366 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"debugfs\" (UniqueName: \"kubernetes.io/host-path/94a4278a-2b52-4344-a640-8a01a54306c2-debugfs\") pod \"94a4278a-2b52-4344-a640-8a01a54306c2\" (UID: \"94a4278a-2b52-4344-a640-8a01a54306c2\") "
	Sep 25 11:04:46 addons-183000 kubelet[2366]: I0925 11:04:46.395264    2366 reconciler_common.go:300] "Volume detached for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/94a4278a-2b52-4344-a640-8a01a54306c2-bpffs\") on node \"addons-183000\" DevicePath \"\""
	Sep 25 11:04:46 addons-183000 kubelet[2366]: I0925 11:04:46.395271    2366 reconciler_common.go:300] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/94a4278a-2b52-4344-a640-8a01a54306c2-host\") on node \"addons-183000\" DevicePath \"\""
	Sep 25 11:04:46 addons-183000 kubelet[2366]: I0925 11:04:46.395275    2366 reconciler_common.go:300] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/94a4278a-2b52-4344-a640-8a01a54306c2-run\") on node \"addons-183000\" DevicePath \"\""
	Sep 25 11:04:46 addons-183000 kubelet[2366]: I0925 11:04:46.395279    2366 reconciler_common.go:300] "Volume detached for volume \"cgroup\" (UniqueName: \"kubernetes.io/host-path/94a4278a-2b52-4344-a640-8a01a54306c2-cgroup\") on node \"addons-183000\" DevicePath \"\""
	Sep 25 11:04:46 addons-183000 kubelet[2366]: I0925 11:04:46.395288    2366 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94a4278a-2b52-4344-a640-8a01a54306c2-debugfs" (OuterVolumeSpecName: "debugfs") pod "94a4278a-2b52-4344-a640-8a01a54306c2" (UID: "94a4278a-2b52-4344-a640-8a01a54306c2"). InnerVolumeSpecName "debugfs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 25 11:04:46 addons-183000 kubelet[2366]: I0925 11:04:46.395319    2366 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94a4278a-2b52-4344-a640-8a01a54306c2-modules" (OuterVolumeSpecName: "modules") pod "94a4278a-2b52-4344-a640-8a01a54306c2" (UID: "94a4278a-2b52-4344-a640-8a01a54306c2"). InnerVolumeSpecName "modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 25 11:04:46 addons-183000 kubelet[2366]: I0925 11:04:46.395822    2366 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94a4278a-2b52-4344-a640-8a01a54306c2-kube-api-access-bqwl5" (OuterVolumeSpecName: "kube-api-access-bqwl5") pod "94a4278a-2b52-4344-a640-8a01a54306c2" (UID: "94a4278a-2b52-4344-a640-8a01a54306c2"). InnerVolumeSpecName "kube-api-access-bqwl5". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 25 11:04:46 addons-183000 kubelet[2366]: I0925 11:04:46.496112    2366 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-bqwl5\" (UniqueName: \"kubernetes.io/projected/94a4278a-2b52-4344-a640-8a01a54306c2-kube-api-access-bqwl5\") on node \"addons-183000\" DevicePath \"\""
	Sep 25 11:04:46 addons-183000 kubelet[2366]: I0925 11:04:46.496137    2366 reconciler_common.go:300] "Volume detached for volume \"modules\" (UniqueName: \"kubernetes.io/host-path/94a4278a-2b52-4344-a640-8a01a54306c2-modules\") on node \"addons-183000\" DevicePath \"\""
	Sep 25 11:04:46 addons-183000 kubelet[2366]: I0925 11:04:46.496143    2366 reconciler_common.go:300] "Volume detached for volume \"debugfs\" (UniqueName: \"kubernetes.io/host-path/94a4278a-2b52-4344-a640-8a01a54306c2-debugfs\") on node \"addons-183000\" DevicePath \"\""
	Sep 25 11:04:47 addons-183000 kubelet[2366]: I0925 11:04:47.160182    2366 scope.go:117] "RemoveContainer" containerID="3214d7d3645b319f499cd4e473783f22248ab293d0b7bd09221747894dd5ebed"
	Sep 25 11:04:47 addons-183000 kubelet[2366]: I0925 11:04:47.169344    2366 scope.go:117] "RemoveContainer" containerID="3214d7d3645b319f499cd4e473783f22248ab293d0b7bd09221747894dd5ebed"
	Sep 25 11:04:47 addons-183000 kubelet[2366]: E0925 11:04:47.169739    2366 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 3214d7d3645b319f499cd4e473783f22248ab293d0b7bd09221747894dd5ebed" containerID="3214d7d3645b319f499cd4e473783f22248ab293d0b7bd09221747894dd5ebed"
	Sep 25 11:04:47 addons-183000 kubelet[2366]: I0925 11:04:47.169783    2366 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"3214d7d3645b319f499cd4e473783f22248ab293d0b7bd09221747894dd5ebed"} err="failed to get container status \"3214d7d3645b319f499cd4e473783f22248ab293d0b7bd09221747894dd5ebed\": rpc error: code = Unknown desc = Error response from daemon: No such container: 3214d7d3645b319f499cd4e473783f22248ab293d0b7bd09221747894dd5ebed"
	Sep 25 11:04:48 addons-183000 kubelet[2366]: I0925 11:04:48.959305    2366 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="94a4278a-2b52-4344-a640-8a01a54306c2" path="/var/lib/kubelet/pods/94a4278a-2b52-4344-a640-8a01a54306c2/volumes"
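	
	This kubelet tail is a clean teardown of the host-path and projected volumes for pod UID 94a4278a-2b52-4344-a640-8a01a54306c2 (evidently the inspektor-gadget pod, removed when that addon was disabled at 04:04 PDT per the Audit table); the only error is a benign RemoveContainer race on an already-deleted container. Orphan cleanup can be spot-checked in the guest:
	
	  out/minikube-darwin-arm64 -p addons-183000 ssh -- ls /var/lib/kubelet/pods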
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-183000 -n addons-183000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-183000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (0.77s)

TestAddons/parallel/MetricsServer (720.82s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:381: failed waiting for metrics-server deployment to stabilize: timed out waiting for the condition
addons_test.go:383: metrics-server stabilized in 6m0.002205667s
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
addons_test.go:385: ***** TestAddons/parallel/MetricsServer: pod "k8s-app=metrics-server" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:385: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-183000 -n addons-183000
addons_test.go:385: TestAddons/parallel/MetricsServer: showing logs for failed pods as of 2023-09-25 04:06:04.701946 -0700 PDT m=+1964.578435585
addons_test.go:386: failed waiting for k8s-app=metrics-server pod: k8s-app=metrics-server within 6m0s: context deadline exceeded
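
The failure mode matches the other addon tests in this run: the harness polls for pods labeled k8s-app=metrics-server in kube-system for 6m0s and none ever reach Running. The equivalent manual check, with the same context and selector the test uses, would be:

  kubectl --context addons-183000 get pods --namespace kube-system -l k8s-app=metrics-server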
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-183000 -n addons-183000
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-183000 logs -n 25
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-427000 | jenkins | v1.31.2 | 25 Sep 23 03:33 PDT |                     |
	|         | -p download-only-427000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-427000 | jenkins | v1.31.2 | 25 Sep 23 03:33 PDT |                     |
	|         | -p download-only-427000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.31.2 | 25 Sep 23 03:33 PDT | 25 Sep 23 03:33 PDT |
	| delete  | -p download-only-427000        | download-only-427000 | jenkins | v1.31.2 | 25 Sep 23 03:33 PDT | 25 Sep 23 03:33 PDT |
	| delete  | -p download-only-427000        | download-only-427000 | jenkins | v1.31.2 | 25 Sep 23 03:33 PDT | 25 Sep 23 03:33 PDT |
	| start   | --download-only -p             | binary-mirror-317000 | jenkins | v1.31.2 | 25 Sep 23 03:33 PDT |                     |
	|         | binary-mirror-317000           |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --binary-mirror                |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49310         |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-317000        | binary-mirror-317000 | jenkins | v1.31.2 | 25 Sep 23 03:33 PDT | 25 Sep 23 03:33 PDT |
	| start   | -p addons-183000               | addons-183000        | jenkins | v1.31.2 | 25 Sep 23 03:33 PDT | 25 Sep 23 03:40 PDT |
	|         | --wait=true --memory=4000      |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --addons=registry              |                      |         |         |                     |                     |
	|         | --addons=metrics-server        |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                      |         |         |                     |                     |
	|         | --addons=gcp-auth              |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|         | --addons=ingress               |                      |         |         |                     |                     |
	|         | --addons=ingress-dns           |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-183000        | jenkins | v1.31.2 | 25 Sep 23 03:52 PDT |                     |
	|         | addons-183000                  |                      |         |         |                     |                     |
	| addons  | enable headlamp                | addons-183000        | jenkins | v1.31.2 | 25 Sep 23 03:52 PDT | 25 Sep 23 03:52 PDT |
	|         | -p addons-183000               |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p    | addons-183000        | jenkins | v1.31.2 | 25 Sep 23 04:04 PDT | 25 Sep 23 04:04 PDT |
	|         | addons-183000                  |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/25 03:33:43
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.21.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0925 03:33:43.113263    1555 out.go:296] Setting OutFile to fd 1 ...
	I0925 03:33:43.113390    1555 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 03:33:43.113393    1555 out.go:309] Setting ErrFile to fd 2...
	I0925 03:33:43.113395    1555 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 03:33:43.113522    1555 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17297-1010/.minikube/bin
	I0925 03:33:43.114539    1555 out.go:303] Setting JSON to false
	I0925 03:33:43.129689    1555 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":198,"bootTime":1695637825,"procs":391,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.6","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 03:33:43.129759    1555 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0925 03:33:43.134529    1555 out.go:177] * [addons-183000] minikube v1.31.2 on Darwin 13.6 (arm64)
	I0925 03:33:43.141636    1555 out.go:177]   - MINIKUBE_LOCATION=17297
	I0925 03:33:43.145595    1555 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	I0925 03:33:43.141675    1555 notify.go:220] Checking for updates...
	I0925 03:33:43.149882    1555 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 03:33:43.152528    1555 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 03:33:43.155561    1555 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	I0925 03:33:43.158461    1555 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 03:33:43.161685    1555 driver.go:373] Setting default libvirt URI to qemu:///system
	I0925 03:33:43.165518    1555 out.go:177] * Using the qemu2 driver based on user configuration
	I0925 03:33:43.170494    1555 start.go:298] selected driver: qemu2
	I0925 03:33:43.170500    1555 start.go:902] validating driver "qemu2" against <nil>
	I0925 03:33:43.170505    1555 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 03:33:43.172415    1555 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0925 03:33:43.175485    1555 out.go:177] * Automatically selected the socket_vmnet network
	I0925 03:33:43.178631    1555 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0925 03:33:43.178656    1555 cni.go:84] Creating CNI manager for ""
	I0925 03:33:43.178667    1555 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 03:33:43.178671    1555 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0925 03:33:43.178683    1555 start_flags.go:321] config:
	{Name:addons-183000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-183000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 03:33:43.182821    1555 iso.go:125] acquiring lock: {Name:mkf881a60cf9fd1672567914305ff6f7a4f13809 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 03:33:43.186491    1555 out.go:177] * Starting control plane node addons-183000 in cluster addons-183000
	I0925 03:33:43.194499    1555 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0925 03:33:43.194520    1555 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0925 03:33:43.194535    1555 cache.go:57] Caching tarball of preloaded images
	I0925 03:33:43.194599    1555 preload.go:174] Found /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0925 03:33:43.194605    1555 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0925 03:33:43.194819    1555 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/config.json ...
	I0925 03:33:43.194831    1555 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/config.json: {Name:mk49657fba0a0e3293097f9bbbd8574691cb2471 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 03:33:43.195036    1555 start.go:365] acquiring machines lock for addons-183000: {Name:mk918d99818e2cf21e5912bc291ed18d4b442ba6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 03:33:43.195158    1555 start.go:369] acquired machines lock for "addons-183000" in 116.458µs
	I0925 03:33:43.195167    1555 start.go:93] Provisioning new machine with config: &{Name:addons-183000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-183000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 03:33:43.195202    1555 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 03:33:43.203570    1555 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0925 03:33:43.526310    1555 start.go:159] libmachine.API.Create for "addons-183000" (driver="qemu2")
	I0925 03:33:43.526360    1555 client.go:168] LocalClient.Create starting
	I0925 03:33:43.526524    1555 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem
	I0925 03:33:43.685162    1555 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/cert.pem
	I0925 03:33:43.725069    1555 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17297-1010/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0925 03:33:44.270899    1555 main.go:141] libmachine: Creating SSH key...
	I0925 03:33:44.356373    1555 main.go:141] libmachine: Creating Disk image...
	I0925 03:33:44.356381    1555 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 03:33:44.356565    1555 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/disk.qcow2.raw /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/disk.qcow2
	I0925 03:33:44.389562    1555 main.go:141] libmachine: STDOUT: 
	I0925 03:33:44.389584    1555 main.go:141] libmachine: STDERR: 
	I0925 03:33:44.389658    1555 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/disk.qcow2 +20000M
	I0925 03:33:44.397120    1555 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 03:33:44.397139    1555 main.go:141] libmachine: STDERR: 
	I0925 03:33:44.397152    1555 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/disk.qcow2
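	
	Those two qemu-img invocations are the entire disk-provisioning step: convert the raw boot2docker image to qcow2, then grow it by the requested 20000MB. Distilled to a standalone sketch (paths shortened, flags exactly as logged):
	
	  qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2
	  qemu-img resize disk.qcow2 +20000M
	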
	I0925 03:33:44.397157    1555 main.go:141] libmachine: Starting QEMU VM...
	I0925 03:33:44.397194    1555 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:70:b3:50:3d:bc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/disk.qcow2
	I0925 03:33:44.464471    1555 main.go:141] libmachine: STDOUT: 
	I0925 03:33:44.464499    1555 main.go:141] libmachine: STDERR: 
	I0925 03:33:44.464503    1555 main.go:141] libmachine: Attempt 0
	I0925 03:33:44.464522    1555 main.go:141] libmachine: Searching for 4e:70:b3:50:3d:bc in /var/db/dhcpd_leases ...
	I0925 03:33:46.465678    1555 main.go:141] libmachine: Attempt 1
	I0925 03:33:46.465761    1555 main.go:141] libmachine: Searching for 4e:70:b3:50:3d:bc in /var/db/dhcpd_leases ...
	I0925 03:33:48.467021    1555 main.go:141] libmachine: Attempt 2
	I0925 03:33:48.467061    1555 main.go:141] libmachine: Searching for 4e:70:b3:50:3d:bc in /var/db/dhcpd_leases ...
	I0925 03:33:50.468194    1555 main.go:141] libmachine: Attempt 3
	I0925 03:33:50.468212    1555 main.go:141] libmachine: Searching for 4e:70:b3:50:3d:bc in /var/db/dhcpd_leases ...
	I0925 03:33:52.469241    1555 main.go:141] libmachine: Attempt 4
	I0925 03:33:52.469258    1555 main.go:141] libmachine: Searching for 4e:70:b3:50:3d:bc in /var/db/dhcpd_leases ...
	I0925 03:33:54.470316    1555 main.go:141] libmachine: Attempt 5
	I0925 03:33:54.470352    1555 main.go:141] libmachine: Searching for 4e:70:b3:50:3d:bc in /var/db/dhcpd_leases ...
	I0925 03:33:56.471428    1555 main.go:141] libmachine: Attempt 6
	I0925 03:33:56.471461    1555 main.go:141] libmachine: Searching for 4e:70:b3:50:3d:bc in /var/db/dhcpd_leases ...
	I0925 03:33:56.471625    1555 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0925 03:33:56.471679    1555 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:4e:70:b3:50:3d:bc ID:1,4e:70:b3:50:3d:bc Lease:0x6512b393}
	I0925 03:33:56.471685    1555 main.go:141] libmachine: Found match: 4e:70:b3:50:3d:bc
	I0925 03:33:56.471705    1555 main.go:141] libmachine: IP: 192.168.105.2
	I0925 03:33:56.471714    1555 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
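	
	IP discovery under the qemu2 driver is nothing more than polling macOS's DHCP lease database every two seconds for the MAC assigned to the VM's virtio NIC; it took seven attempts (~12s) here. The same lookup by hand:
	
	  grep -i '4e:70:b3:50:3d:bc' /var/db/dhcpd_leases
	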
	I0925 03:33:57.476002    1555 machine.go:88] provisioning docker machine ...
	I0925 03:33:57.476029    1555 buildroot.go:166] provisioning hostname "addons-183000"
	I0925 03:33:57.476399    1555 main.go:141] libmachine: Using SSH client type: native
	I0925 03:33:57.476656    1555 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100de8760] 0x100deaed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0925 03:33:57.476663    1555 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-183000 && echo "addons-183000" | sudo tee /etc/hostname
	I0925 03:33:57.549226    1555 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-183000
	
	I0925 03:33:57.549294    1555 main.go:141] libmachine: Using SSH client type: native
	I0925 03:33:57.549565    1555 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100de8760] 0x100deaed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0925 03:33:57.549580    1555 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-183000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-183000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-183000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0925 03:33:57.619664    1555 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0925 03:33:57.619678    1555 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17297-1010/.minikube CaCertPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17297-1010/.minikube}
	I0925 03:33:57.619692    1555 buildroot.go:174] setting up certificates
	I0925 03:33:57.619698    1555 provision.go:83] configureAuth start
	I0925 03:33:57.619702    1555 provision.go:138] copyHostCerts
	I0925 03:33:57.619800    1555 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17297-1010/.minikube/key.pem (1679 bytes)
	I0925 03:33:57.620015    1555 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17297-1010/.minikube/ca.pem (1082 bytes)
	I0925 03:33:57.620106    1555 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17297-1010/.minikube/cert.pem (1123 bytes)
	I0925 03:33:57.620180    1555 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca-key.pem org=jenkins.addons-183000 san=[192.168.105.2 192.168.105.2 localhost 127.0.0.1 minikube addons-183000]
	I0925 03:33:57.680529    1555 provision.go:172] copyRemoteCerts
	I0925 03:33:57.680584    1555 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0925 03:33:57.680600    1555 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/id_rsa Username:docker}
	I0925 03:33:57.716693    1555 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0925 03:33:57.724070    1555 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0925 03:33:57.731348    1555 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0925 03:33:57.738044    1555 provision.go:86] duration metric: configureAuth took 118.340875ms
	I0925 03:33:57.738067    1555 buildroot.go:189] setting minikube options for container-runtime
	I0925 03:33:57.738181    1555 config.go:182] Loaded profile config "addons-183000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 03:33:57.738225    1555 main.go:141] libmachine: Using SSH client type: native
	I0925 03:33:57.738442    1555 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100de8760] 0x100deaed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0925 03:33:57.738446    1555 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0925 03:33:57.806528    1555 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0925 03:33:57.806536    1555 buildroot.go:70] root file system type: tmpfs
	I0925 03:33:57.806591    1555 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0925 03:33:57.806639    1555 main.go:141] libmachine: Using SSH client type: native
	I0925 03:33:57.806901    1555 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100de8760] 0x100deaed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0925 03:33:57.806939    1555 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0925 03:33:57.879305    1555 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0925 03:33:57.879349    1555 main.go:141] libmachine: Using SSH client type: native
	I0925 03:33:57.879600    1555 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100de8760] 0x100deaed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0925 03:33:57.879612    1555 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0925 03:33:58.218156    1555 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0925 03:33:58.218176    1555 machine.go:91] provisioned docker machine in 742.178459ms
	I0925 03:33:58.218184    1555 client.go:171] LocalClient.Create took 14.692090292s
	I0925 03:33:58.218196    1555 start.go:167] duration metric: libmachine.API.Create for "addons-183000" took 14.692162542s
	I0925 03:33:58.218201    1555 start.go:300] post-start starting for "addons-183000" (driver="qemu2")
	I0925 03:33:58.218213    1555 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0925 03:33:58.218288    1555 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0925 03:33:58.218298    1555 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/id_rsa Username:docker}
	I0925 03:33:58.255037    1555 ssh_runner.go:195] Run: cat /etc/os-release
	I0925 03:33:58.256454    1555 info.go:137] Remote host: Buildroot 2021.02.12
	I0925 03:33:58.256461    1555 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17297-1010/.minikube/addons for local assets ...
	I0925 03:33:58.256533    1555 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17297-1010/.minikube/files for local assets ...
	I0925 03:33:58.256562    1555 start.go:303] post-start completed in 38.354459ms
	I0925 03:33:58.256920    1555 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/config.json ...
	I0925 03:33:58.257077    1555 start.go:128] duration metric: createHost completed in 15.062148875s
	I0925 03:33:58.257104    1555 main.go:141] libmachine: Using SSH client type: native
	I0925 03:33:58.257337    1555 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100de8760] 0x100deaed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0925 03:33:58.257341    1555 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0925 03:33:58.325173    1555 main.go:141] libmachine: SSH cmd err, output: <nil>: 1695638038.462407626
	
	I0925 03:33:58.325184    1555 fix.go:206] guest clock: 1695638038.462407626
	I0925 03:33:58.325188    1555 fix.go:219] Guest: 2023-09-25 03:33:58.462407626 -0700 PDT Remote: 2023-09-25 03:33:58.257082 -0700 PDT m=+15.162425626 (delta=205.325626ms)
	I0925 03:33:58.325199    1555 fix.go:190] guest clock delta is within tolerance: 205.325626ms
	I0925 03:33:58.325201    1555 start.go:83] releasing machines lock for "addons-183000", held for 15.130317917s
	I0925 03:33:58.325486    1555 ssh_runner.go:195] Run: cat /version.json
	I0925 03:33:58.325494    1555 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/id_rsa Username:docker}
	I0925 03:33:58.325516    1555 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0925 03:33:58.325555    1555 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/id_rsa Username:docker}
	I0925 03:33:58.361340    1555 ssh_runner.go:195] Run: systemctl --version
	I0925 03:33:58.402839    1555 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0925 03:33:58.404630    1555 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0925 03:33:58.404664    1555 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0925 03:33:58.409389    1555 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0925 03:33:58.409398    1555 start.go:469] detecting cgroup driver to use...
	I0925 03:33:58.409504    1555 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0925 03:33:58.414731    1555 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0925 03:33:58.417759    1555 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0925 03:33:58.420882    1555 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0925 03:33:58.420905    1555 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0925 03:33:58.424376    1555 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0925 03:33:58.427971    1555 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0925 03:33:58.431438    1555 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0925 03:33:58.434555    1555 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0925 03:33:58.437481    1555 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0925 03:33:58.440650    1555 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0925 03:33:58.444117    1555 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0925 03:33:58.446963    1555 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 03:33:58.506828    1555 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0925 03:33:58.515326    1555 start.go:469] detecting cgroup driver to use...
	I0925 03:33:58.515396    1555 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0925 03:33:58.520350    1555 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0925 03:33:58.525290    1555 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0925 03:33:58.532641    1555 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0925 03:33:58.537661    1555 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0925 03:33:58.542291    1555 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0925 03:33:58.583433    1555 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0925 03:33:58.588627    1555 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0925 03:33:58.594011    1555 ssh_runner.go:195] Run: which cri-dockerd
	I0925 03:33:58.595317    1555 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0925 03:33:58.597772    1555 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0925 03:33:58.602614    1555 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0925 03:33:58.687592    1555 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0925 03:33:58.763371    1555 docker.go:554] configuring docker to use "cgroupfs" as cgroup driver...
	I0925 03:33:58.763431    1555 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0925 03:33:58.768807    1555 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 03:33:58.850856    1555 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0925 03:34:00.021109    1555 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.170257708s)
	I0925 03:34:00.021184    1555 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0925 03:34:00.102397    1555 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0925 03:34:00.182389    1555 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0925 03:34:00.242288    1555 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 03:34:00.310048    1555 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0925 03:34:00.320927    1555 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 03:34:00.397773    1555 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0925 03:34:00.421934    1555 start.go:516] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0925 03:34:00.422022    1555 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0925 03:34:00.424107    1555 start.go:537] Will wait 60s for crictl version
	I0925 03:34:00.424134    1555 ssh_runner.go:195] Run: which crictl
	I0925 03:34:00.425400    1555 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0925 03:34:00.448268    1555 start.go:553] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I0925 03:34:00.448328    1555 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0925 03:34:00.458640    1555 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0925 03:34:00.474285    1555 out.go:204] * Preparing Kubernetes v1.28.2 on Docker 24.0.6 ...
	I0925 03:34:00.474362    1555 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0925 03:34:00.475766    1555 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0925 03:34:00.479918    1555 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0925 03:34:00.479959    1555 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0925 03:34:00.485137    1555 docker.go:664] Got preloaded images: 
	I0925 03:34:00.485144    1555 docker.go:670] registry.k8s.io/kube-apiserver:v1.28.2 wasn't preloaded
	I0925 03:34:00.485184    1555 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0925 03:34:00.488328    1555 ssh_runner.go:195] Run: which lz4
	I0925 03:34:00.489753    1555 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0925 03:34:00.490946    1555 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0925 03:34:00.490958    1555 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (356993689 bytes)
	I0925 03:34:01.821604    1555 docker.go:628] Took 1.331913 seconds to copy over tarball
	I0925 03:34:01.821663    1555 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0925 03:34:02.850635    1555 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.028977875s)
	I0925 03:34:02.850646    1555 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0925 03:34:02.866214    1555 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0925 03:34:02.869216    1555 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0925 03:34:02.874196    1555 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 03:34:02.955148    1555 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0925 03:34:05.167252    1555 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.212127209s)
	I0925 03:34:05.167356    1555 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0925 03:34:05.173293    1555 docker.go:664] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.2
	registry.k8s.io/kube-scheduler:v1.28.2
	registry.k8s.io/kube-proxy:v1.28.2
	registry.k8s.io/kube-controller-manager:v1.28.2
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0925 03:34:05.173304    1555 cache_images.go:84] Images are preloaded, skipping loading
	I0925 03:34:05.173372    1555 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0925 03:34:05.180961    1555 cni.go:84] Creating CNI manager for ""
	I0925 03:34:05.180975    1555 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 03:34:05.180995    1555 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0925 03:34:05.181006    1555 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-183000 NodeName:addons-183000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0925 03:34:05.181071    1555 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-183000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0925 03:34:05.181111    1555 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-183000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:addons-183000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0925 03:34:05.181162    1555 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I0925 03:34:05.184441    1555 binaries.go:44] Found k8s binaries, skipping transfer
	I0925 03:34:05.184477    1555 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0925 03:34:05.187654    1555 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0925 03:34:05.192980    1555 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0925 03:34:05.197983    1555 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I0925 03:34:05.202799    1555 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0925 03:34:05.204148    1555 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0925 03:34:05.208295    1555 certs.go:56] Setting up /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000 for IP: 192.168.105.2
	I0925 03:34:05.208303    1555 certs.go:190] acquiring lock for shared ca certs: {Name:mk095b03680bcdeba6c321a9f458c9fbafa67639 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 03:34:05.208463    1555 certs.go:204] generating minikubeCA CA: /Users/jenkins/minikube-integration/17297-1010/.minikube/ca.key
	I0925 03:34:05.279404    1555 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17297-1010/.minikube/ca.crt ...
	I0925 03:34:05.279413    1555 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/ca.crt: {Name:mk70f9fc8ba800117a8a8b4d751d3a98c619cb54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 03:34:05.279591    1555 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17297-1010/.minikube/ca.key ...
	I0925 03:34:05.279595    1555 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/ca.key: {Name:mkd44aa01a2f3e5b978643c9a3feb1028c2bb791 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 03:34:05.279712    1555 certs.go:204] generating proxyClientCA CA: /Users/jenkins/minikube-integration/17297-1010/.minikube/proxy-client-ca.key
	I0925 03:34:05.342350    1555 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17297-1010/.minikube/proxy-client-ca.crt ...
	I0925 03:34:05.342356    1555 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/proxy-client-ca.crt: {Name:mkc0af119bea050a868312bfe8f89d742604990c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 03:34:05.342558    1555 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17297-1010/.minikube/proxy-client-ca.key ...
	I0925 03:34:05.342563    1555 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/proxy-client-ca.key: {Name:mka9b8c6393173e2358c8b84eb9bff6ea6851f33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 03:34:05.342694    1555 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/client.key
	I0925 03:34:05.342700    1555 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/client.crt with IP's: []
	I0925 03:34:05.380999    1555 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/client.crt ...
	I0925 03:34:05.381013    1555 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/client.crt: {Name:mkec4b98dbbfb657baac4f5fae18fe43bd8b5970 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 03:34:05.381125    1555 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/client.key ...
	I0925 03:34:05.381130    1555 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/client.key: {Name:mk8be81ea1673fa1894559e8faa2fa2323674614 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 03:34:05.381227    1555 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/apiserver.key.96055969
	I0925 03:34:05.381235    1555 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/apiserver.crt.96055969 with IP's: [192.168.105.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0925 03:34:05.441721    1555 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/apiserver.crt.96055969 ...
	I0925 03:34:05.441725    1555 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/apiserver.crt.96055969: {Name:mkba38dc1a56241112b86d1503bca4f2588c1bf7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 03:34:05.441849    1555 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/apiserver.key.96055969 ...
	I0925 03:34:05.441852    1555 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/apiserver.key.96055969: {Name:mk41423e9550dcb3371da4467db52078d1bb4d78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 03:34:05.441956    1555 certs.go:337] copying /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/apiserver.crt.96055969 -> /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/apiserver.crt
	I0925 03:34:05.442053    1555 certs.go:341] copying /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/apiserver.key.96055969 -> /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/apiserver.key
	I0925 03:34:05.442146    1555 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/proxy-client.key
	I0925 03:34:05.442154    1555 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/proxy-client.crt with IP's: []
	I0925 03:34:05.578079    1555 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/proxy-client.crt ...
	I0925 03:34:05.578082    1555 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/proxy-client.crt: {Name:mkbd132fd7a0f2cb28d572f95bd43c9a1ef215f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 03:34:05.578216    1555 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/proxy-client.key ...
	I0925 03:34:05.578218    1555 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/proxy-client.key: {Name:mkf93f480df65e887c0e782806fe1d821d05370d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 03:34:05.578436    1555 certs.go:437] found cert: /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca-key.pem (1675 bytes)
	I0925 03:34:05.578458    1555 certs.go:437] found cert: /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem (1082 bytes)
	I0925 03:34:05.578479    1555 certs.go:437] found cert: /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/cert.pem (1123 bytes)
	I0925 03:34:05.578499    1555 certs.go:437] found cert: /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/key.pem (1679 bytes)
	I0925 03:34:05.578876    1555 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0925 03:34:05.587435    1555 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0925 03:34:05.594545    1555 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0925 03:34:05.601433    1555 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0925 03:34:05.608504    1555 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0925 03:34:05.616247    1555 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0925 03:34:05.623555    1555 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0925 03:34:05.630877    1555 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0925 03:34:05.637827    1555 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0925 03:34:05.644421    1555 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0925 03:34:05.650432    1555 ssh_runner.go:195] Run: openssl version
	I0925 03:34:05.652383    1555 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0925 03:34:05.655860    1555 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0925 03:34:05.657450    1555 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 25 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I0925 03:34:05.657472    1555 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0925 03:34:05.659354    1555 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0925 03:34:05.662355    1555 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0925 03:34:05.663775    1555 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0925 03:34:05.663811    1555 kubeadm.go:404] StartCluster: {Name:addons-183000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-183000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 03:34:05.663875    1555 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0925 03:34:05.669363    1555 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0925 03:34:05.672641    1555 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0925 03:34:05.675788    1555 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0925 03:34:05.678955    1555 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0925 03:34:05.678977    1555 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0925 03:34:05.700129    1555 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I0925 03:34:05.700165    1555 kubeadm.go:322] [preflight] Running pre-flight checks
	I0925 03:34:05.762507    1555 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0925 03:34:05.762580    1555 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0925 03:34:05.762631    1555 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0925 03:34:05.856523    1555 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0925 03:34:05.862696    1555 out.go:204]   - Generating certificates and keys ...
	I0925 03:34:05.862744    1555 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0925 03:34:05.862781    1555 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0925 03:34:05.954799    1555 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0925 03:34:06.088347    1555 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0925 03:34:06.179074    1555 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0925 03:34:06.367263    1555 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0925 03:34:06.441263    1555 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0925 03:34:06.441326    1555 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-183000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0925 03:34:06.679555    1555 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0925 03:34:06.679622    1555 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-183000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0925 03:34:06.780717    1555 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0925 03:34:06.934557    1555 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0925 03:34:07.004571    1555 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0925 03:34:07.004599    1555 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0925 03:34:07.096444    1555 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0925 03:34:07.197087    1555 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0925 03:34:07.295019    1555 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0925 03:34:07.459088    1555 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0925 03:34:07.459841    1555 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0925 03:34:07.461016    1555 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0925 03:34:07.464311    1555 out.go:204]   - Booting up control plane ...
	I0925 03:34:07.464429    1555 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0925 03:34:07.464523    1555 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0925 03:34:07.464562    1555 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0925 03:34:07.468573    1555 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0925 03:34:07.468914    1555 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0925 03:34:07.468980    1555 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0925 03:34:07.551081    1555 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0925 03:34:11.552205    1555 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.001307 seconds
	I0925 03:34:11.552277    1555 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0925 03:34:11.558090    1555 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0925 03:34:12.066492    1555 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0925 03:34:12.066604    1555 kubeadm.go:322] [mark-control-plane] Marking the node addons-183000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0925 03:34:12.571455    1555 kubeadm.go:322] [bootstrap-token] Using token: dcud0i.8u8422zl7jahtpxe
	I0925 03:34:12.577836    1555 out.go:204]   - Configuring RBAC rules ...
	I0925 03:34:12.577916    1555 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0925 03:34:12.580042    1555 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0925 03:34:12.583046    1555 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0925 03:34:12.584193    1555 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0925 03:34:12.585457    1555 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0925 03:34:12.586636    1555 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0925 03:34:12.592832    1555 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0925 03:34:12.757427    1555 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0925 03:34:12.982058    1555 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0925 03:34:12.982629    1555 kubeadm.go:322] 
	I0925 03:34:12.982664    1555 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0925 03:34:12.982667    1555 kubeadm.go:322] 
	I0925 03:34:12.982715    1555 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0925 03:34:12.982721    1555 kubeadm.go:322] 
	I0925 03:34:12.982735    1555 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0925 03:34:12.982762    1555 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0925 03:34:12.982824    1555 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0925 03:34:12.982828    1555 kubeadm.go:322] 
	I0925 03:34:12.982852    1555 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0925 03:34:12.982856    1555 kubeadm.go:322] 
	I0925 03:34:12.982895    1555 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0925 03:34:12.982898    1555 kubeadm.go:322] 
	I0925 03:34:12.982927    1555 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0925 03:34:12.982998    1555 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0925 03:34:12.983041    1555 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0925 03:34:12.983046    1555 kubeadm.go:322] 
	I0925 03:34:12.983087    1555 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0925 03:34:12.983123    1555 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0925 03:34:12.983125    1555 kubeadm.go:322] 
	I0925 03:34:12.983172    1555 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token dcud0i.8u8422zl7jahtpxe \
	I0925 03:34:12.983225    1555 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:3fc5fb926713648f8638ba10da0d4f45584d32929bcc07af5ada491c000ad47e \
	I0925 03:34:12.983240    1555 kubeadm.go:322] 	--control-plane 
	I0925 03:34:12.983242    1555 kubeadm.go:322] 
	I0925 03:34:12.983281    1555 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0925 03:34:12.983285    1555 kubeadm.go:322] 
	I0925 03:34:12.983328    1555 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token dcud0i.8u8422zl7jahtpxe \
	I0925 03:34:12.983387    1555 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:3fc5fb926713648f8638ba10da0d4f45584d32929bcc07af5ada491c000ad47e 
	I0925 03:34:12.983463    1555 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0925 03:34:12.983472    1555 cni.go:84] Creating CNI manager for ""
	I0925 03:34:12.983479    1555 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 03:34:12.992098    1555 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0925 03:34:12.995235    1555 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0925 03:34:12.999700    1555 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0925 03:34:13.004656    1555 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0925 03:34:13.004755    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=1bf6c3d5317028f348e55ea19d261973a6487d3c minikube.k8s.io/name=addons-183000 minikube.k8s.io/updated_at=2023_09_25T03_34_13_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:13.004757    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:13.008164    1555 ops.go:34] apiserver oom_adj: -16
	I0925 03:34:13.063625    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:13.095139    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:13.629666    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:14.129649    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:14.629662    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:15.129655    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:15.629628    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:16.129723    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:16.629660    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:17.129683    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:17.629643    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:18.129619    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:18.629638    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:19.129594    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:19.629589    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:20.129625    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:20.629540    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:21.129598    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:21.629573    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:22.129550    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:22.629493    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:23.129517    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:23.629511    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:24.129464    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:24.629448    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:25.129565    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:25.629529    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:26.129496    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:26.629436    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:26.667079    1555 kubeadm.go:1081] duration metric: took 13.662618083s to wait for elevateKubeSystemPrivileges.
	I0925 03:34:26.667097    1555 kubeadm.go:406] StartCluster complete in 21.003673917s
	I0925 03:34:26.667106    1555 settings.go:142] acquiring lock: {Name:mkb5a0822179f07ef9369c44aa9b64eb9ef74eed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 03:34:26.667266    1555 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17297-1010/kubeconfig
	I0925 03:34:26.667431    1555 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/kubeconfig: {Name:mkaa9d09ca2bf27c1a43efc9acf938adcc68343d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 03:34:26.667677    1555 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0925 03:34:26.667722    1555 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0925 03:34:26.667779    1555 addons.go:69] Setting volumesnapshots=true in profile "addons-183000"
	I0925 03:34:26.667782    1555 addons.go:69] Setting cloud-spanner=true in profile "addons-183000"
	I0925 03:34:26.667785    1555 addons.go:231] Setting addon volumesnapshots=true in "addons-183000"
	I0925 03:34:26.667789    1555 addons.go:231] Setting addon cloud-spanner=true in "addons-183000"
	I0925 03:34:26.667790    1555 addons.go:69] Setting default-storageclass=true in profile "addons-183000"
	I0925 03:34:26.667799    1555 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-183000"
	I0925 03:34:26.667820    1555 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-183000"
	I0925 03:34:26.667848    1555 host.go:66] Checking if "addons-183000" exists ...
	I0925 03:34:26.667850    1555 addons.go:69] Setting registry=true in profile "addons-183000"
	I0925 03:34:26.667858    1555 addons.go:231] Setting addon registry=true in "addons-183000"
	I0925 03:34:26.667849    1555 addons.go:69] Setting metrics-server=true in profile "addons-183000"
	I0925 03:34:26.667873    1555 addons.go:231] Setting addon metrics-server=true in "addons-183000"
	I0925 03:34:26.667880    1555 host.go:66] Checking if "addons-183000" exists ...
	I0925 03:34:26.667881    1555 addons.go:69] Setting gcp-auth=true in profile "addons-183000"
	I0925 03:34:26.667902    1555 mustload.go:65] Loading cluster: addons-183000
	I0925 03:34:26.667915    1555 host.go:66] Checking if "addons-183000" exists ...
	I0925 03:34:26.667948    1555 host.go:66] Checking if "addons-183000" exists ...
	I0925 03:34:26.667977    1555 config.go:182] Loaded profile config "addons-183000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 03:34:26.668033    1555 addons.go:69] Setting ingress-dns=true in profile "addons-183000"
	I0925 03:34:26.668035    1555 addons.go:69] Setting inspektor-gadget=true in profile "addons-183000"
	I0925 03:34:26.668042    1555 addons.go:69] Setting storage-provisioner=true in profile "addons-183000"
	I0925 03:34:26.668047    1555 addons.go:231] Setting addon storage-provisioner=true in "addons-183000"
	I0925 03:34:26.668049    1555 addons.go:231] Setting addon inspektor-gadget=true in "addons-183000"
	I0925 03:34:26.668059    1555 host.go:66] Checking if "addons-183000" exists ...
	I0925 03:34:26.668076    1555 host.go:66] Checking if "addons-183000" exists ...
	W0925 03:34:26.668189    1555 host.go:54] host status for "addons-183000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/monitor: connect: connection refused
	W0925 03:34:26.668197    1555 addons.go:277] "addons-183000" is not running, setting volumesnapshots=true and skipping enablement (err=<nil>)
	I0925 03:34:26.667780    1555 addons.go:69] Setting ingress=true in profile "addons-183000"
	I0925 03:34:26.668202    1555 addons.go:231] Setting addon ingress=true in "addons-183000"
	I0925 03:34:26.668215    1555 host.go:66] Checking if "addons-183000" exists ...
	W0925 03:34:26.668271    1555 host.go:54] host status for "addons-183000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/monitor: connect: connection refused
	W0925 03:34:26.668277    1555 addons.go:277] "addons-183000" is not running, setting storage-provisioner=true and skipping enablement (err=<nil>)
	I0925 03:34:26.668038    1555 addons.go:231] Setting addon ingress-dns=true in "addons-183000"
	I0925 03:34:26.668289    1555 host.go:66] Checking if "addons-183000" exists ...
	I0925 03:34:26.667873    1555 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-183000"
	I0925 03:34:26.668351    1555 host.go:66] Checking if "addons-183000" exists ...
	W0925 03:34:26.668420    1555 host.go:54] host status for "addons-183000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/monitor: connect: connection refused
	W0925 03:34:26.668426    1555 addons.go:277] "addons-183000" is not running, setting ingress=true and skipping enablement (err=<nil>)
	I0925 03:34:26.668428    1555 addons.go:467] Verifying addon ingress=true in "addons-183000"
	I0925 03:34:26.671815    1555 out.go:177] * Verifying ingress addon...
	I0925 03:34:26.668077    1555 config.go:182] Loaded profile config "addons-183000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	W0925 03:34:26.668443    1555 host.go:54] host status for "addons-183000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/monitor: connect: connection refused
	W0925 03:34:26.668492    1555 host.go:54] host status for "addons-183000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/monitor: connect: connection refused
	W0925 03:34:26.668560    1555 host.go:54] host status for "addons-183000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/monitor: connect: connection refused
	W0925 03:34:26.668562    1555 host.go:54] host status for "addons-183000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/monitor: connect: connection refused
	W0925 03:34:26.668565    1555 host.go:54] host status for "addons-183000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/monitor: connect: connection refused
	I0925 03:34:26.674660    1555 addons.go:231] Setting addon default-storageclass=true in "addons-183000"
	W0925 03:34:26.679882    1555 addons.go:277] "addons-183000" is not running, setting ingress-dns=true and skipping enablement (err=<nil>)
	W0925 03:34:26.679903    1555 addons.go:277] "addons-183000" is not running, setting csi-hostpath-driver=true and skipping enablement (err=<nil>)
	W0925 03:34:26.679909    1555 addons.go:277] "addons-183000" is not running, setting cloud-spanner=true and skipping enablement (err=<nil>)
	W0925 03:34:26.679910    1555 addons.go:277] "addons-183000" is not running, setting metrics-server=true and skipping enablement (err=<nil>)
	W0925 03:34:26.679914    1555 addons.go:277] "addons-183000" is not running, setting registry=true and skipping enablement (err=<nil>)
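	Note: the pattern behind the host.go:54 / addons.go:277 pairs above is that the status probe on the QEMU monitor socket fails, so each addon is only recorded as enabled in the profile config and live enablement is skipped. A minimal sketch of that flow, with all names illustrative rather than minikube's actual code:

	package main

	import (
		"errors"
		"fmt"
	)

	// probeHost stands in for the monitor-socket status check failing above
	// with "connect: connection refused".
	func probeHost() error {
		return errors.New("state: connect: dial unix .../monitor: connect: connection refused")
	}

	// enableAddon records the desired state and skips live enablement when the
	// host cannot be reached, mirroring the addons.go:277 warnings in the log.
	func enableAddon(profile map[string]bool, name string) {
		if err := probeHost(); err != nil {
			profile[name] = true // persisted for the next start
			fmt.Printf("%q is not running, setting %s=true and skipping enablement\n",
				"addons-183000", name)
			return
		}
		// ...apply the addon's manifests against the running cluster...
	}

	func main() {
		enableAddon(map[string]bool{}, "registry")
	}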
	I0925 03:34:26.680408    1555 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0925 03:34:26.680587    1555 host.go:66] Checking if "addons-183000" exists ...
	I0925 03:34:26.685884    1555 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-183000"
	I0925 03:34:26.691851    1555 out.go:177] * Verifying csi-hostpath-driver addon...
	I0925 03:34:26.685950    1555 addons.go:467] Verifying addon metrics-server=true in "addons-183000"
	I0925 03:34:26.685956    1555 addons.go:467] Verifying addon registry=true in "addons-183000"
	I0925 03:34:26.685976    1555 host.go:66] Checking if "addons-183000" exists ...
	I0925 03:34:26.685980    1555 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.20.0
	I0925 03:34:26.693878    1555 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0925 03:34:26.696802    1555 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-183000" context rescaled to 1 replicas
	I0925 03:34:26.698859    1555 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 03:34:26.700109    1555 out.go:177] * Verifying Kubernetes components...
	I0925 03:34:26.699453    1555 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0925 03:34:26.699742    1555 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0925 03:34:26.709918    1555 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0925 03:34:26.713912    1555 out.go:177] * Verifying registry addon...
	I0925 03:34:26.717867    1555 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0925 03:34:26.720802    1555 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/id_rsa Username:docker}
	I0925 03:34:26.717891    1555 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0925 03:34:26.720819    1555 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0925 03:34:26.720825    1555 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/id_rsa Username:docker}
	I0925 03:34:26.721266    1555 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0925 03:34:26.726699    1555 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=registry
	I0925 03:34:26.728776    1555 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
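	The kapi.go waits above poll the cluster for pods matching a label selector until they report Ready or the context deadline passes. A minimal client-go sketch of the underlying list call, assuming the kubeconfig path used elsewhere in this log:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Same selector the registry wait above is polling on.
		pods, err := client.CoreV1().Pods("kube-system").List(context.Background(),
			metav1.ListOptions{LabelSelector: "kubernetes.io/minikube-addons=registry"})
		if err != nil {
			panic(err)
		}
		fmt.Printf("Found %d Pods for label selector\n", len(pods.Items))
	}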
	I0925 03:34:26.751434    1555 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0925 03:34:26.751798    1555 node_ready.go:35] waiting up to 6m0s for node "addons-183000" to be "Ready" ...
	I0925 03:34:26.753298    1555 node_ready.go:49] node "addons-183000" has status "Ready":"True"
	I0925 03:34:26.753320    1555 node_ready.go:38] duration metric: took 1.500542ms waiting for node "addons-183000" to be "Ready" ...
	I0925 03:34:26.753326    1555 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0925 03:34:26.756603    1555 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-nj9v5" in "kube-system" namespace to be "Ready" ...
	I0925 03:34:26.894346    1555 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0925 03:34:26.894357    1555 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0925 03:34:26.894362    1555 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0925 03:34:26.913613    1555 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I0925 03:34:26.913623    1555 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0925 03:34:26.955544    1555 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0925 03:34:26.955558    1555 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0925 03:34:26.966254    1555 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0925 03:34:26.966263    1555 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0925 03:34:26.970978    1555 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0925 03:34:26.970984    1555 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0925 03:34:26.980045    1555 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I0925 03:34:26.980056    1555 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0925 03:34:27.011877    1555 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0925 03:34:27.011886    1555 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I0925 03:34:27.035496    1555 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0925 03:34:27.284243    1555 start.go:923] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
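	For reference, the sed pipeline at 03:34:26.751434 inserts exactly two things into the CoreDNS Corefile: a log directive before errors, and a hosts block before the forward directive. After the replace, the relevant part of the Corefile would read as below; the surrounding directives are the stock kubeadm defaults and are elided:

	.:53 {
	    log
	    errors
	    ...
	    hosts {
	       192.168.105.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	    ...
	}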
	I0925 03:34:28.770683    1555 pod_ready.go:102] pod "coredns-5dd5756b68-nj9v5" in "kube-system" namespace has status "Ready":"False"
	I0925 03:34:30.771066    1555 pod_ready.go:102] pod "coredns-5dd5756b68-nj9v5" in "kube-system" namespace has status "Ready":"False"
	I0925 03:34:33.271406    1555 pod_ready.go:102] pod "coredns-5dd5756b68-nj9v5" in "kube-system" namespace has status "Ready":"False"
	I0925 03:34:33.290034    1555 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0925 03:34:33.290047    1555 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/id_rsa Username:docker}
	I0925 03:34:33.333376    1555 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0925 03:34:33.340520    1555 addons.go:231] Setting addon gcp-auth=true in "addons-183000"
	I0925 03:34:33.340540    1555 host.go:66] Checking if "addons-183000" exists ...
	I0925 03:34:33.341291    1555 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0925 03:34:33.341299    1555 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/id_rsa Username:docker}
	I0925 03:34:33.385047    1555 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0925 03:34:33.390017    1555 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0925 03:34:33.393078    1555 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0925 03:34:33.393083    1555 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0925 03:34:33.401443    1555 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0925 03:34:33.401449    1555 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0925 03:34:33.408814    1555 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0925 03:34:33.408821    1555 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0925 03:34:33.415868    1555 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0925 03:34:33.956480    1555 addons.go:467] Verifying addon gcp-auth=true in "addons-183000"
	I0925 03:34:33.962940    1555 out.go:177] * Verifying gcp-auth addon...
	I0925 03:34:33.970267    1555 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0925 03:34:33.972814    1555 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0925 03:34:33.972821    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:33.975859    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:34.479146    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:34.978976    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:35.477962    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:35.770777    1555 pod_ready.go:102] pod "coredns-5dd5756b68-nj9v5" in "kube-system" namespace has status "Ready":"False"
	I0925 03:34:35.978841    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:36.478564    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:36.978738    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:37.478896    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:37.978838    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:38.273778    1555 pod_ready.go:102] pod "coredns-5dd5756b68-nj9v5" in "kube-system" namespace has status "Ready":"False"
	I0925 03:34:38.478811    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:38.770881    1555 pod_ready.go:92] pod "coredns-5dd5756b68-nj9v5" in "kube-system" namespace has status "Ready":"True"
	I0925 03:34:38.770889    1555 pod_ready.go:81] duration metric: took 12.014493833s waiting for pod "coredns-5dd5756b68-nj9v5" in "kube-system" namespace to be "Ready" ...
	I0925 03:34:38.770893    1555 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-183000" in "kube-system" namespace to be "Ready" ...
	I0925 03:34:38.773593    1555 pod_ready.go:92] pod "etcd-addons-183000" in "kube-system" namespace has status "Ready":"True"
	I0925 03:34:38.773599    1555 pod_ready.go:81] duration metric: took 2.702459ms waiting for pod "etcd-addons-183000" in "kube-system" namespace to be "Ready" ...
	I0925 03:34:38.773602    1555 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-183000" in "kube-system" namespace to be "Ready" ...
	I0925 03:34:38.775799    1555 pod_ready.go:92] pod "kube-apiserver-addons-183000" in "kube-system" namespace has status "Ready":"True"
	I0925 03:34:38.775804    1555 pod_ready.go:81] duration metric: took 2.198875ms waiting for pod "kube-apiserver-addons-183000" in "kube-system" namespace to be "Ready" ...
	I0925 03:34:38.775808    1555 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-183000" in "kube-system" namespace to be "Ready" ...
	I0925 03:34:38.777922    1555 pod_ready.go:92] pod "kube-controller-manager-addons-183000" in "kube-system" namespace has status "Ready":"True"
	I0925 03:34:38.777929    1555 pod_ready.go:81] duration metric: took 2.118625ms waiting for pod "kube-controller-manager-addons-183000" in "kube-system" namespace to be "Ready" ...
	I0925 03:34:38.777933    1555 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7t7bh" in "kube-system" namespace to be "Ready" ...
	I0925 03:34:38.780129    1555 pod_ready.go:92] pod "kube-proxy-7t7bh" in "kube-system" namespace has status "Ready":"True"
	I0925 03:34:38.780136    1555 pod_ready.go:81] duration metric: took 2.199875ms waiting for pod "kube-proxy-7t7bh" in "kube-system" namespace to be "Ready" ...
	I0925 03:34:38.780139    1555 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-183000" in "kube-system" namespace to be "Ready" ...
	I0925 03:34:38.977389    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:39.173086    1555 pod_ready.go:92] pod "kube-scheduler-addons-183000" in "kube-system" namespace has status "Ready":"True"
	I0925 03:34:39.173096    1555 pod_ready.go:81] duration metric: took 392.960166ms waiting for pod "kube-scheduler-addons-183000" in "kube-system" namespace to be "Ready" ...
	I0925 03:34:39.173100    1555 pod_ready.go:38] duration metric: took 12.419997458s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
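	The pod_ready.go checks above reduce to reading the PodReady condition from each pod's status; "Ready":"True" in these lines is that condition's value. A self-contained sketch of the predicate:

	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
	)

	// isPodReady reports whether the pod's PodReady condition is True, the
	// same test the waits above keep re-evaluating.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		pod := &corev1.Pod{Status: corev1.PodStatus{
			Conditions: []corev1.PodCondition{{Type: corev1.PodReady, Status: corev1.ConditionTrue}},
		}}
		fmt.Println(isPodReady(pod)) // true
	}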
	I0925 03:34:39.173111    1555 api_server.go:52] waiting for apiserver process to appear ...
	I0925 03:34:39.173181    1555 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 03:34:39.178068    1555 api_server.go:72] duration metric: took 12.479424625s to wait for apiserver process to appear ...
	I0925 03:34:39.178075    1555 api_server.go:88] waiting for apiserver healthz status ...
	I0925 03:34:39.178081    1555 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I0925 03:34:39.182471    1555 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
	I0925 03:34:39.183204    1555 api_server.go:141] control plane version: v1.28.2
	I0925 03:34:39.183210    1555 api_server.go:131] duration metric: took 5.132042ms to wait for apiserver health ...
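	The healthz wait above is a plain HTTPS GET against the apiserver that succeeds once the endpoint returns 200 with body "ok". A quick manual probe against the same endpoint; minikube itself trusts the cluster CA, and InsecureSkipVerify here only keeps the sketch short:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
		}}
		resp, err := client.Get("https://192.168.105.2:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
	}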
	I0925 03:34:39.183213    1555 system_pods.go:43] waiting for kube-system pods to appear ...
	I0925 03:34:39.372354    1555 system_pods.go:59] 6 kube-system pods found
	I0925 03:34:39.372365    1555 system_pods.go:61] "coredns-5dd5756b68-nj9v5" [b1bb0e62-0339-479f-9572-1e07ab015a1d] Running
	I0925 03:34:39.372368    1555 system_pods.go:61] "etcd-addons-183000" [98901ac9-8165-4fad-b6a6-6c757da8e783] Running
	I0925 03:34:39.372371    1555 system_pods.go:61] "kube-apiserver-addons-183000" [b3899bc1-2055-47fb-aded-8cc3e5ca8b22] Running
	I0925 03:34:39.372373    1555 system_pods.go:61] "kube-controller-manager-addons-183000" [12803b97-0e90-4869-a114-2dce351af701] Running
	I0925 03:34:39.372376    1555 system_pods.go:61] "kube-proxy-7t7bh" [b51c70db-a512-4aae-af91-8b45e6ce9f89] Running
	I0925 03:34:39.372378    1555 system_pods.go:61] "kube-scheduler-addons-183000" [543428f6-b6ce-448c-9d3e-48c775396c75] Running
	I0925 03:34:39.372382    1555 system_pods.go:74] duration metric: took 189.166917ms to wait for pod list to return data ...
	I0925 03:34:39.372386    1555 default_sa.go:34] waiting for default service account to be created ...
	I0925 03:34:39.478483    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:39.569942    1555 default_sa.go:45] found service account: "default"
	I0925 03:34:39.569952    1555 default_sa.go:55] duration metric: took 197.566292ms for default service account to be created ...
	I0925 03:34:39.569955    1555 system_pods.go:116] waiting for k8s-apps to be running ...
	I0925 03:34:39.771555    1555 system_pods.go:86] 6 kube-system pods found
	I0925 03:34:39.771566    1555 system_pods.go:89] "coredns-5dd5756b68-nj9v5" [b1bb0e62-0339-479f-9572-1e07ab015a1d] Running
	I0925 03:34:39.771569    1555 system_pods.go:89] "etcd-addons-183000" [98901ac9-8165-4fad-b6a6-6c757da8e783] Running
	I0925 03:34:39.771571    1555 system_pods.go:89] "kube-apiserver-addons-183000" [b3899bc1-2055-47fb-aded-8cc3e5ca8b22] Running
	I0925 03:34:39.771573    1555 system_pods.go:89] "kube-controller-manager-addons-183000" [12803b97-0e90-4869-a114-2dce351af701] Running
	I0925 03:34:39.771576    1555 system_pods.go:89] "kube-proxy-7t7bh" [b51c70db-a512-4aae-af91-8b45e6ce9f89] Running
	I0925 03:34:39.771579    1555 system_pods.go:89] "kube-scheduler-addons-183000" [543428f6-b6ce-448c-9d3e-48c775396c75] Running
	I0925 03:34:39.771582    1555 system_pods.go:126] duration metric: took 201.627792ms to wait for k8s-apps to be running ...
	I0925 03:34:39.771585    1555 system_svc.go:44] waiting for kubelet service to be running ....
	I0925 03:34:39.771649    1555 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0925 03:34:39.777059    1555 system_svc.go:56] duration metric: took 5.471834ms WaitForService to wait for kubelet.
	I0925 03:34:39.777072    1555 kubeadm.go:581] duration metric: took 13.078440792s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0925 03:34:39.777081    1555 node_conditions.go:102] verifying NodePressure condition ...
	I0925 03:34:39.970496    1555 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0925 03:34:39.970507    1555 node_conditions.go:123] node cpu capacity is 2
	I0925 03:34:39.970512    1555 node_conditions.go:105] duration metric: took 193.43225ms to run NodePressure ...
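	Verifying NodePressure amounts to checking that the node reports ConditionFalse for the three pressure conditions, the same MemoryPressure/DiskPressure/PIDPressure rows shown under Conditions in the describe output further below. A minimal sketch:

	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
	)

	// underPressure returns true if any pressure condition is not False.
	func underPressure(node *corev1.Node) bool {
		for _, c := range node.Status.Conditions {
			switch c.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				if c.Status != corev1.ConditionFalse {
					return true
				}
			}
		}
		return false
	}

	func main() {
		node := &corev1.Node{Status: corev1.NodeStatus{Conditions: []corev1.NodeCondition{
			{Type: corev1.NodeMemoryPressure, Status: corev1.ConditionFalse},
			{Type: corev1.NodeDiskPressure, Status: corev1.ConditionFalse},
			{Type: corev1.NodePIDPressure, Status: corev1.ConditionFalse},
		}}}
		fmt.Println(underPressure(node)) // false
	}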
	I0925 03:34:39.970518    1555 start.go:228] waiting for startup goroutines ...
	I0925 03:34:39.977869    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:40.478718    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:40.978494    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:41.478330    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:41.978723    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:42.478484    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:42.978499    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:43.478310    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:43.978560    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:44.478626    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:44.978747    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:45.478652    1555 kapi.go:107] duration metric: took 11.508592542s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0925 03:34:45.482917    1555 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-183000 cluster.
	I0925 03:34:45.486908    1555 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0925 03:34:45.489839    1555 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
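	Concretely, opting a pod out of the credential mount only requires the gcp-auth-skip-secret label key mentioned above on the pod's metadata. A hedged sketch that prints such a manifest; the pod name, image, and label value are illustrative:

	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"sigs.k8s.io/yaml"
	)

	func main() {
		pod := corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name: "no-creds", // illustrative
				Labels: map[string]string{
					"gcp-auth-skip-secret": "true", // opt-out label from the message above
				},
			},
			Spec: corev1.PodSpec{Containers: []corev1.Container{
				{Name: "app", Image: "busybox"},
			}},
		}
		out, _ := yaml.Marshal(pod)
		fmt.Println(string(out))
	}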
	I0925 03:40:26.681420    1555 kapi.go:107] duration metric: took 6m0.007630792s to wait for app.kubernetes.io/name=ingress-nginx ...
	W0925 03:40:26.681519    1555 out.go:239] ! Enabling 'ingress' returned an error: running callbacks: [waiting for app.kubernetes.io/name=ingress-nginx pods: context deadline exceeded]
	I0925 03:40:26.713271    1555 kapi.go:107] duration metric: took 6m0.020443166s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	W0925 03:40:26.713301    1555 out.go:239] ! Enabling 'csi-hostpath-driver' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=csi-hostpath-driver pods: context deadline exceeded]
	I0925 03:40:26.715027    1555 kapi.go:107] duration metric: took 6m0.000386167s to wait for kubernetes.io/minikube-addons=registry ...
	W0925 03:40:26.715058    1555 out.go:239] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
	I0925 03:40:26.720408    1555 out.go:177] * Enabled addons: volumesnapshots, storage-provisioner, cloud-spanner, ingress-dns, metrics-server, default-storageclass, inspektor-gadget, gcp-auth
	I0925 03:40:26.729284    1555 addons.go:502] enable addons completed in 6m0.068199458s: enabled=[volumesnapshots storage-provisioner cloud-spanner ingress-dns metrics-server default-storageclass inspektor-gadget gcp-auth]
	I0925 03:40:26.729295    1555 start.go:233] waiting for cluster config update ...
	I0925 03:40:26.729300    1555 start.go:242] writing updated cluster config ...
	I0925 03:40:26.729761    1555 ssh_runner.go:195] Run: rm -f paused
	I0925 03:40:26.760421    1555 start.go:600] kubectl: 1.27.2, cluster: 1.28.2 (minor skew: 1)
	I0925 03:40:26.764251    1555 out.go:177] * Done! kubectl is now configured to use "addons-183000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Mon 2023-09-25 10:33:55 UTC, ends at Mon 2023-09-25 11:06:04 UTC. --
	Sep 25 10:34:42 addons-183000 dockerd[1111]: time="2023-09-25T10:34:42.404325086Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 25 10:34:44 addons-183000 cri-dockerd[998]: time="2023-09-25T10:34:44Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf: Status: Downloaded newer image for gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf"
	Sep 25 10:34:44 addons-183000 dockerd[1111]: time="2023-09-25T10:34:44.321936694Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 25 10:34:44 addons-183000 dockerd[1111]: time="2023-09-25T10:34:44.321971829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 10:34:44 addons-183000 dockerd[1111]: time="2023-09-25T10:34:44.321982528Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 25 10:34:44 addons-183000 dockerd[1111]: time="2023-09-25T10:34:44.321989314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 10:52:28 addons-183000 dockerd[1111]: time="2023-09-25T10:52:28.470595101Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 25 10:52:28 addons-183000 dockerd[1111]: time="2023-09-25T10:52:28.470647350Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 10:52:28 addons-183000 dockerd[1111]: time="2023-09-25T10:52:28.470663058Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 25 10:52:28 addons-183000 dockerd[1111]: time="2023-09-25T10:52:28.470673850Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 10:52:28 addons-183000 cri-dockerd[998]: time="2023-09-25T10:52:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/df99a16ef61333f49304447de1f31c9677e9243b43dae14dfba57e8a2aeeb1be/resolv.conf as [nameserver 10.96.0.10 search headlamp.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 25 10:52:28 addons-183000 dockerd[1105]: time="2023-09-25T10:52:28.813334559Z" level=warning msg="reference for unknown type: " digest="sha256:bb15916c96306cd14f1c9c09c639d01d1d1fb854fd770bf99f3e7a9deb584753" remote="ghcr.io/headlamp-k8s/headlamp@sha256:bb15916c96306cd14f1c9c09c639d01d1d1fb854fd770bf99f3e7a9deb584753"
	Sep 25 10:52:33 addons-183000 cri-dockerd[998]: time="2023-09-25T10:52:33Z" level=info msg="Stop pulling image ghcr.io/headlamp-k8s/headlamp:v0.19.1@sha256:bb15916c96306cd14f1c9c09c639d01d1d1fb854fd770bf99f3e7a9deb584753: Status: Downloaded newer image for ghcr.io/headlamp-k8s/headlamp@sha256:bb15916c96306cd14f1c9c09c639d01d1d1fb854fd770bf99f3e7a9deb584753"
	Sep 25 10:52:34 addons-183000 dockerd[1111]: time="2023-09-25T10:52:34.018791826Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 25 10:52:34 addons-183000 dockerd[1111]: time="2023-09-25T10:52:34.018844700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 10:52:34 addons-183000 dockerd[1111]: time="2023-09-25T10:52:34.018856825Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 25 10:52:34 addons-183000 dockerd[1111]: time="2023-09-25T10:52:34.018863408Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 11:04:46 addons-183000 dockerd[1111]: time="2023-09-25T11:04:46.234281937Z" level=info msg="shim disconnected" id=3214d7d3645b319f499cd4e473783f22248ab293d0b7bd09221747894dd5ebed namespace=moby
	Sep 25 11:04:46 addons-183000 dockerd[1111]: time="2023-09-25T11:04:46.234315062Z" level=warning msg="cleaning up after shim disconnected" id=3214d7d3645b319f499cd4e473783f22248ab293d0b7bd09221747894dd5ebed namespace=moby
	Sep 25 11:04:46 addons-183000 dockerd[1111]: time="2023-09-25T11:04:46.234319604Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 25 11:04:46 addons-183000 dockerd[1105]: time="2023-09-25T11:04:46.234527935Z" level=info msg="ignoring event" container=3214d7d3645b319f499cd4e473783f22248ab293d0b7bd09221747894dd5ebed module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 25 11:04:46 addons-183000 dockerd[1105]: time="2023-09-25T11:04:46.264129930Z" level=info msg="ignoring event" container=1f38ec635c03d87bfa52e9a8918af2011a604df8d3e7dc5113f3e662ce6bb608 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 25 11:04:46 addons-183000 dockerd[1111]: time="2023-09-25T11:04:46.264758631Z" level=info msg="shim disconnected" id=1f38ec635c03d87bfa52e9a8918af2011a604df8d3e7dc5113f3e662ce6bb608 namespace=moby
	Sep 25 11:04:46 addons-183000 dockerd[1111]: time="2023-09-25T11:04:46.264787964Z" level=warning msg="cleaning up after shim disconnected" id=1f38ec635c03d87bfa52e9a8918af2011a604df8d3e7dc5113f3e662ce6bb608 namespace=moby
	Sep 25 11:04:46 addons-183000 dockerd[1111]: time="2023-09-25T11:04:46.264792755Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                          CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d5793dcd01c69       ghcr.io/headlamp-k8s/headlamp@sha256:bb15916c96306cd14f1c9c09c639d01d1d1fb854fd770bf99f3e7a9deb584753          13 minutes ago      Running             headlamp                  0                   df99a16ef6133       headlamp-58b88cff49-kdgv2
	f0ceeef2fd99f       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf   31 minutes ago      Running             gcp-auth                  0                   217fc96b3ae84       gcp-auth-d4c87556c-fgkgk
	09ae8580d310e       97e04611ad434                                                                                                  31 minutes ago      Running             coredns                   0                   9802832060d13       coredns-5dd5756b68-nj9v5
	fff72387d957b       7da62c127fc0f                                                                                                  31 minutes ago      Running             kube-proxy                0                   2514b88f9fbec       kube-proxy-7t7bh
	e24563a552742       89d57b83c1786                                                                                                  31 minutes ago      Running             kube-controller-manager   0                   7170972f2383c       kube-controller-manager-addons-183000
	e38f0c6d58f79       30bb499447fe1                                                                                                  31 minutes ago      Running             kube-apiserver            0                   e3ec8dad501d8       kube-apiserver-addons-183000
	202a7fdac8250       9cdd6470f48c8                                                                                                  31 minutes ago      Running             etcd                      0                   f07db97eda3c5       etcd-addons-183000
	5a87dfcd0e1a4       64fc40cee3716                                                                                                  31 minutes ago      Running             kube-scheduler            0                   88f62df9ef878       kube-scheduler-addons-183000
	
	* 
	* ==> coredns [09ae8580d310] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:53855 - 12762 "HINFO IN 6175233926506353361.1980247959579836404. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004134462s
	[INFO] 10.244.0.5:53045 - 37584 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000106198s
	[INFO] 10.244.0.5:58309 - 60928 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000170558s
	[INFO] 10.244.0.5:51843 - 23622 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000213104s
	[INFO] 10.244.0.5:42760 - 58990 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000042504s
	[INFO] 10.244.0.5:51340 - 46119 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00004929s
	[INFO] 10.244.0.5:39848 - 8379 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000023105s
	[INFO] 10.244.0.5:32887 - 31577 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001136668s
	[INFO] 10.244.0.5:49269 - 43084 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.001085546s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-183000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-183000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1bf6c3d5317028f348e55ea19d261973a6487d3c
	                    minikube.k8s.io/name=addons-183000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_25T03_34_13_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 25 Sep 2023 10:34:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-183000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 25 Sep 2023 11:06:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 25 Sep 2023 11:02:55 +0000   Mon, 25 Sep 2023 10:34:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 25 Sep 2023 11:02:55 +0000   Mon, 25 Sep 2023 10:34:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 25 Sep 2023 11:02:55 +0000   Mon, 25 Sep 2023 10:34:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 25 Sep 2023 11:02:55 +0000   Mon, 25 Sep 2023 10:34:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-183000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 3ec93b0c295a46b69f667e92919bae36
	  System UUID:                3ec93b0c295a46b69f667e92919bae36
	  Boot ID:                    e140f335-14d6-4d36-af6f-4c16a72ee860
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  gcp-auth                    gcp-auth-d4c87556c-fgkgk                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         31m
	  headlamp                    headlamp-58b88cff49-kdgv2                0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-5dd5756b68-nj9v5                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     31m
	  kube-system                 etcd-addons-183000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         31m
	  kube-system                 kube-apiserver-addons-183000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         31m
	  kube-system                 kube-controller-manager-addons-183000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         31m
	  kube-system                 kube-proxy-7t7bh                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         31m
	  kube-system                 kube-scheduler-addons-183000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         31m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 31m   kube-proxy       
	  Normal  Starting                 31m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  31m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  31m   kubelet          Node addons-183000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    31m   kubelet          Node addons-183000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     31m   kubelet          Node addons-183000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                31m   kubelet          Node addons-183000 status is now: NodeReady
	  Normal  RegisteredNode           31m   node-controller  Node addons-183000 event: Registered Node addons-183000 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.641440] EINJ: EINJ table not found.
	[  +0.489201] systemd-fstab-generator[117]: Ignoring "noauto" for root device
	[  +0.043090] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000792] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +3.110509] systemd-fstab-generator[482]: Ignoring "noauto" for root device
	[  +0.074666] systemd-fstab-generator[494]: Ignoring "noauto" for root device
	[  +0.418795] systemd-fstab-generator[667]: Ignoring "noauto" for root device
	[  +0.183648] systemd-fstab-generator[704]: Ignoring "noauto" for root device
	[  +0.073331] systemd-fstab-generator[715]: Ignoring "noauto" for root device
	[  +0.088908] systemd-fstab-generator[728]: Ignoring "noauto" for root device
	[  +1.149460] kauditd_printk_skb: 17 callbacks suppressed
	[  +0.104006] systemd-fstab-generator[917]: Ignoring "noauto" for root device
	[  +0.078468] systemd-fstab-generator[928]: Ignoring "noauto" for root device
	[  +0.058376] systemd-fstab-generator[939]: Ignoring "noauto" for root device
	[  +0.070842] systemd-fstab-generator[950]: Ignoring "noauto" for root device
	[  +0.085054] systemd-fstab-generator[991]: Ignoring "noauto" for root device
	[Sep25 10:34] systemd-fstab-generator[1098]: Ignoring "noauto" for root device
	[  +2.191489] kauditd_printk_skb: 29 callbacks suppressed
	[  +2.399489] systemd-fstab-generator[1471]: Ignoring "noauto" for root device
	[  +5.122490] systemd-fstab-generator[2347]: Ignoring "noauto" for root device
	[ +14.463207] kauditd_printk_skb: 41 callbacks suppressed
	[  +6.798894] kauditd_printk_skb: 21 callbacks suppressed
	[  +4.810513] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[  +3.498700] kauditd_printk_skb: 12 callbacks suppressed
	[Sep25 10:52] kauditd_printk_skb: 5 callbacks suppressed
	
	* 
	* ==> etcd [202a7fdac825] <==
	* {"level":"info","ts":"2023-09-25T10:34:31.937317Z","caller":"traceutil/trace.go:171","msg":"trace[667548922] transaction","detail":"{read_only:false; response_revision:416; number_of_response:1; }","duration":"126.937574ms","start":"2023-09-25T10:34:31.810371Z","end":"2023-09-25T10:34:31.937309Z","steps":["trace[667548922] 'process raft request'  (duration: 126.824018ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-25T10:34:36.99882Z","caller":"traceutil/trace.go:171","msg":"trace[243510449] linearizableReadLoop","detail":"{readStateIndex:482; appliedIndex:481; }","duration":"165.982552ms","start":"2023-09-25T10:34:36.832829Z","end":"2023-09-25T10:34:36.998811Z","steps":["trace[243510449] 'read index received'  (duration: 165.770762ms)","trace[243510449] 'applied index is now lower than readState.Index'  (duration: 211.209µs)"],"step_count":2}
	{"level":"warn","ts":"2023-09-25T10:34:36.998969Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"166.151453ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2023-09-25T10:34:36.999019Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.796797ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-5dd5756b68-nj9v5\" ","response":"range_response_count:1 size:5002"}
	{"level":"info","ts":"2023-09-25T10:34:36.999045Z","caller":"traceutil/trace.go:171","msg":"trace[2057756314] range","detail":"{range_begin:/registry/pods/kube-system/coredns-5dd5756b68-nj9v5; range_end:; response_count:1; response_revision:469; }","duration":"123.811177ms","start":"2023-09-25T10:34:36.875219Z","end":"2023-09-25T10:34:36.99903Z","steps":["trace[2057756314] 'agreement among raft nodes before linearized reading'  (duration: 123.788776ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-25T10:34:36.999164Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"162.803156ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:6 size:31393"}
	{"level":"info","ts":"2023-09-25T10:34:36.999205Z","caller":"traceutil/trace.go:171","msg":"trace[1483278895] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:6; response_revision:469; }","duration":"162.834825ms","start":"2023-09-25T10:34:36.836356Z","end":"2023-09-25T10:34:36.99919Z","steps":["trace[1483278895] 'agreement among raft nodes before linearized reading'  (duration: 162.701625ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-25T10:34:36.999Z","caller":"traceutil/trace.go:171","msg":"trace[3634572] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:0; response_revision:469; }","duration":"166.183579ms","start":"2023-09-25T10:34:36.832812Z","end":"2023-09-25T10:34:36.998995Z","steps":["trace[3634572] 'agreement among raft nodes before linearized reading'  (duration: 166.053912ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-25T10:34:36.998947Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"163.574471ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:6 size:31393"}
	{"level":"info","ts":"2023-09-25T10:34:36.999285Z","caller":"traceutil/trace.go:171","msg":"trace[819315326] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:6; response_revision:469; }","duration":"163.922954ms","start":"2023-09-25T10:34:36.83536Z","end":"2023-09-25T10:34:36.999283Z","steps":["trace[819315326] 'agreement among raft nodes before linearized reading'  (duration: 163.541307ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-25T10:44:09.779305Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":608}
	{"level":"info","ts":"2023-09-25T10:44:09.779775Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":608,"took":"346.08µs","hash":977468107}
	{"level":"info","ts":"2023-09-25T10:44:09.779794Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":977468107,"revision":608,"compact-revision":-1}
	{"level":"info","ts":"2023-09-25T10:49:09.783821Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":698}
	{"level":"info","ts":"2023-09-25T10:49:09.784257Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":698,"took":"244.664µs","hash":3592134345}
	{"level":"info","ts":"2023-09-25T10:49:09.784273Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3592134345,"revision":698,"compact-revision":608}
	{"level":"info","ts":"2023-09-25T10:54:09.786046Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":788}
	{"level":"info","ts":"2023-09-25T10:54:09.786391Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":788,"took":"183.456µs","hash":2921777324}
	{"level":"info","ts":"2023-09-25T10:54:09.786401Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2921777324,"revision":788,"compact-revision":698}
	{"level":"info","ts":"2023-09-25T10:59:09.78846Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":921}
	{"level":"info","ts":"2023-09-25T10:59:09.788929Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":921,"took":"220.122µs","hash":3368749381}
	{"level":"info","ts":"2023-09-25T10:59:09.788942Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3368749381,"revision":921,"compact-revision":788}
	{"level":"info","ts":"2023-09-25T11:04:09.790636Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1015}
	{"level":"info","ts":"2023-09-25T11:04:09.790979Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1015,"took":"207.581µs","hash":1495489543}
	{"level":"info","ts":"2023-09-25T11:04:09.790991Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1495489543,"revision":1015,"compact-revision":921}
	
	* 
	* ==> gcp-auth [f0ceeef2fd99] <==
	* 2023/09/25 10:34:44 GCP Auth Webhook started!
	2023/09/25 10:52:28 Ready to marshal response ...
	2023/09/25 10:52:28 Ready to write response ...
	2023/09/25 10:52:28 Ready to marshal response ...
	2023/09/25 10:52:28 Ready to write response ...
	2023/09/25 10:52:28 Ready to marshal response ...
	2023/09/25 10:52:28 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  11:06:05 up 32 min,  0 users,  load average: 0.10, 0.10, 0.09
	Linux addons-183000 5.10.57 #1 SMP PREEMPT Mon Sep 18 20:10:16 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [e38f0c6d58f7] <==
	* I0925 10:34:11.481757       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0925 10:34:11.494186       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0925 10:34:11.536039       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0925 10:34:11.538110       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.105.2]
	I0925 10:34:11.538484       1 controller.go:624] quota admission added evaluator for: endpoints
	I0925 10:34:11.539858       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0925 10:34:12.380709       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0925 10:34:12.885080       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0925 10:34:12.893075       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0925 10:34:12.904077       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0925 10:34:26.498134       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0925 10:34:26.509156       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0925 10:34:27.526494       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0925 10:34:34.002889       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.108.108.100"}
	I0925 10:34:34.022823       1 controller.go:624] quota admission added evaluator for: jobs.batch
	I0925 10:39:10.399639       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0925 10:44:10.399758       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0925 10:49:10.400399       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0925 10:52:28.085866       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.104.63.194"}
	I0925 10:54:10.400553       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0925 10:59:10.400827       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0925 11:04:10.400976       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0925 11:04:46.170548       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0925 11:04:46.172109       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0925 11:04:47.178215       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	
	* 
	* ==> kube-controller-manager [e24563a55274] <==
	* W0925 11:04:51.163542       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0925 11:04:51.163590       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0925 11:04:55.706231       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0925 11:04:55.706246       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0925 11:04:56.201466       1 namespace_controller.go:182] "Namespace has been deleted" namespace="gadget"
	E0925 11:04:56.530313       1 pv_controller.go:1562] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0925 11:04:56.530378       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	I0925 11:04:56.954060       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0925 11:04:56.954078       1 shared_informer.go:318] Caches are synced for resource quota
	I0925 11:04:57.174724       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0925 11:04:57.174742       1 shared_informer.go:318] Caches are synced for garbage collector
	W0925 11:05:03.058914       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0925 11:05:03.058934       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0925 11:05:11.530674       1 pv_controller.go:1562] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0925 11:05:11.530726       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	W0925 11:05:17.575329       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0925 11:05:17.575344       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0925 11:05:26.531205       1 pv_controller.go:1562] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0925 11:05:26.531270       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0925 11:05:41.531912       1 pv_controller.go:1562] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0925 11:05:41.531980       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	W0925 11:05:47.769318       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0925 11:05:47.769352       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0925 11:05:56.532848       1 pv_controller.go:1562] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0925 11:05:56.532998       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	
	* 
	* ==> kube-proxy [fff72387d957] <==
	* I0925 10:34:27.163880       1 server_others.go:69] "Using iptables proxy"
	I0925 10:34:27.181208       1 node.go:141] Successfully retrieved node IP: 192.168.105.2
	I0925 10:34:27.228178       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0925 10:34:27.228201       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0925 10:34:27.231917       1 server_others.go:152] "Using iptables Proxier"
	I0925 10:34:27.231983       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0925 10:34:27.232100       1 server.go:846] "Version info" version="v1.28.2"
	I0925 10:34:27.232211       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0925 10:34:27.232663       1 config.go:188] "Starting service config controller"
	I0925 10:34:27.232700       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0925 10:34:27.232734       1 config.go:97] "Starting endpoint slice config controller"
	I0925 10:34:27.232760       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0925 10:34:27.233047       1 config.go:315] "Starting node config controller"
	I0925 10:34:27.233085       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0925 10:34:27.333424       1 shared_informer.go:318] Caches are synced for node config
	I0925 10:34:27.333462       1 shared_informer.go:318] Caches are synced for service config
	I0925 10:34:27.333490       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [5a87dfcd0e1a] <==
	* W0925 10:34:10.412769       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0925 10:34:10.413000       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0925 10:34:10.412552       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0925 10:34:10.413020       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0925 10:34:10.412572       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0925 10:34:10.413082       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0925 10:34:10.412878       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0925 10:34:10.413107       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0925 10:34:11.233945       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0925 10:34:11.233969       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0925 10:34:11.245555       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0925 10:34:11.245565       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0925 10:34:11.257234       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0925 10:34:11.257245       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0925 10:34:11.305366       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0925 10:34:11.305376       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0925 10:34:11.335532       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0925 10:34:11.335546       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0925 10:34:11.379250       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0925 10:34:11.379349       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0925 10:34:11.401540       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0925 10:34:11.401585       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0925 10:34:11.494359       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0925 10:34:11.494379       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0925 10:34:13.407721       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-09-25 10:33:55 UTC, ends at Mon 2023-09-25 11:06:05 UTC. --
	Sep 25 11:04:46 addons-183000 kubelet[2366]: I0925 11:04:46.395205    2366 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/94a4278a-2b52-4344-a640-8a01a54306c2-bpffs\") pod \"94a4278a-2b52-4344-a640-8a01a54306c2\" (UID: \"94a4278a-2b52-4344-a640-8a01a54306c2\") "
	Sep 25 11:04:46 addons-183000 kubelet[2366]: I0925 11:04:46.395215    2366 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94a4278a-2b52-4344-a640-8a01a54306c2-cgroup" (OuterVolumeSpecName: "cgroup") pod "94a4278a-2b52-4344-a640-8a01a54306c2" (UID: "94a4278a-2b52-4344-a640-8a01a54306c2"). InnerVolumeSpecName "cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 25 11:04:46 addons-183000 kubelet[2366]: I0925 11:04:46.395218    2366 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqwl5\" (UniqueName: \"kubernetes.io/projected/94a4278a-2b52-4344-a640-8a01a54306c2-kube-api-access-bqwl5\") pod \"94a4278a-2b52-4344-a640-8a01a54306c2\" (UID: \"94a4278a-2b52-4344-a640-8a01a54306c2\") "
	Sep 25 11:04:46 addons-183000 kubelet[2366]: I0925 11:04:46.395223    2366 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94a4278a-2b52-4344-a640-8a01a54306c2-bpffs" (OuterVolumeSpecName: "bpffs") pod "94a4278a-2b52-4344-a640-8a01a54306c2" (UID: "94a4278a-2b52-4344-a640-8a01a54306c2"). InnerVolumeSpecName "bpffs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 25 11:04:46 addons-183000 kubelet[2366]: I0925 11:04:46.395237    2366 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"modules\" (UniqueName: \"kubernetes.io/host-path/94a4278a-2b52-4344-a640-8a01a54306c2-modules\") pod \"94a4278a-2b52-4344-a640-8a01a54306c2\" (UID: \"94a4278a-2b52-4344-a640-8a01a54306c2\") "
	Sep 25 11:04:46 addons-183000 kubelet[2366]: I0925 11:04:46.395246    2366 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"debugfs\" (UniqueName: \"kubernetes.io/host-path/94a4278a-2b52-4344-a640-8a01a54306c2-debugfs\") pod \"94a4278a-2b52-4344-a640-8a01a54306c2\" (UID: \"94a4278a-2b52-4344-a640-8a01a54306c2\") "
	Sep 25 11:04:46 addons-183000 kubelet[2366]: I0925 11:04:46.395264    2366 reconciler_common.go:300] "Volume detached for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/94a4278a-2b52-4344-a640-8a01a54306c2-bpffs\") on node \"addons-183000\" DevicePath \"\""
	Sep 25 11:04:46 addons-183000 kubelet[2366]: I0925 11:04:46.395271    2366 reconciler_common.go:300] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/94a4278a-2b52-4344-a640-8a01a54306c2-host\") on node \"addons-183000\" DevicePath \"\""
	Sep 25 11:04:46 addons-183000 kubelet[2366]: I0925 11:04:46.395275    2366 reconciler_common.go:300] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/94a4278a-2b52-4344-a640-8a01a54306c2-run\") on node \"addons-183000\" DevicePath \"\""
	Sep 25 11:04:46 addons-183000 kubelet[2366]: I0925 11:04:46.395279    2366 reconciler_common.go:300] "Volume detached for volume \"cgroup\" (UniqueName: \"kubernetes.io/host-path/94a4278a-2b52-4344-a640-8a01a54306c2-cgroup\") on node \"addons-183000\" DevicePath \"\""
	Sep 25 11:04:46 addons-183000 kubelet[2366]: I0925 11:04:46.395288    2366 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94a4278a-2b52-4344-a640-8a01a54306c2-debugfs" (OuterVolumeSpecName: "debugfs") pod "94a4278a-2b52-4344-a640-8a01a54306c2" (UID: "94a4278a-2b52-4344-a640-8a01a54306c2"). InnerVolumeSpecName "debugfs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 25 11:04:46 addons-183000 kubelet[2366]: I0925 11:04:46.395319    2366 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94a4278a-2b52-4344-a640-8a01a54306c2-modules" (OuterVolumeSpecName: "modules") pod "94a4278a-2b52-4344-a640-8a01a54306c2" (UID: "94a4278a-2b52-4344-a640-8a01a54306c2"). InnerVolumeSpecName "modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 25 11:04:46 addons-183000 kubelet[2366]: I0925 11:04:46.395822    2366 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94a4278a-2b52-4344-a640-8a01a54306c2-kube-api-access-bqwl5" (OuterVolumeSpecName: "kube-api-access-bqwl5") pod "94a4278a-2b52-4344-a640-8a01a54306c2" (UID: "94a4278a-2b52-4344-a640-8a01a54306c2"). InnerVolumeSpecName "kube-api-access-bqwl5". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 25 11:04:46 addons-183000 kubelet[2366]: I0925 11:04:46.496112    2366 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-bqwl5\" (UniqueName: \"kubernetes.io/projected/94a4278a-2b52-4344-a640-8a01a54306c2-kube-api-access-bqwl5\") on node \"addons-183000\" DevicePath \"\""
	Sep 25 11:04:46 addons-183000 kubelet[2366]: I0925 11:04:46.496137    2366 reconciler_common.go:300] "Volume detached for volume \"modules\" (UniqueName: \"kubernetes.io/host-path/94a4278a-2b52-4344-a640-8a01a54306c2-modules\") on node \"addons-183000\" DevicePath \"\""
	Sep 25 11:04:46 addons-183000 kubelet[2366]: I0925 11:04:46.496143    2366 reconciler_common.go:300] "Volume detached for volume \"debugfs\" (UniqueName: \"kubernetes.io/host-path/94a4278a-2b52-4344-a640-8a01a54306c2-debugfs\") on node \"addons-183000\" DevicePath \"\""
	Sep 25 11:04:47 addons-183000 kubelet[2366]: I0925 11:04:47.160182    2366 scope.go:117] "RemoveContainer" containerID="3214d7d3645b319f499cd4e473783f22248ab293d0b7bd09221747894dd5ebed"
	Sep 25 11:04:47 addons-183000 kubelet[2366]: I0925 11:04:47.169344    2366 scope.go:117] "RemoveContainer" containerID="3214d7d3645b319f499cd4e473783f22248ab293d0b7bd09221747894dd5ebed"
	Sep 25 11:04:47 addons-183000 kubelet[2366]: E0925 11:04:47.169739    2366 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 3214d7d3645b319f499cd4e473783f22248ab293d0b7bd09221747894dd5ebed" containerID="3214d7d3645b319f499cd4e473783f22248ab293d0b7bd09221747894dd5ebed"
	Sep 25 11:04:47 addons-183000 kubelet[2366]: I0925 11:04:47.169783    2366 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"3214d7d3645b319f499cd4e473783f22248ab293d0b7bd09221747894dd5ebed"} err="failed to get container status \"3214d7d3645b319f499cd4e473783f22248ab293d0b7bd09221747894dd5ebed\": rpc error: code = Unknown desc = Error response from daemon: No such container: 3214d7d3645b319f499cd4e473783f22248ab293d0b7bd09221747894dd5ebed"
	Sep 25 11:04:48 addons-183000 kubelet[2366]: I0925 11:04:48.959305    2366 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="94a4278a-2b52-4344-a640-8a01a54306c2" path="/var/lib/kubelet/pods/94a4278a-2b52-4344-a640-8a01a54306c2/volumes"
	Sep 25 11:05:12 addons-183000 kubelet[2366]: E0925 11:05:12.961373    2366 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 25 11:05:12 addons-183000 kubelet[2366]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 25 11:05:12 addons-183000 kubelet[2366]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 25 11:05:12 addons-183000 kubelet[2366]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
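
The controller-manager excerpt above is dominated by ProvisioningFailed events for PVC default/hpvc, all with the same cause: no "csi-hostpath-sc" StorageClass was ever registered. Below is a minimal client-go triage sketch, not part of the test suite; the class name comes from the log, while the kubeconfig path and everything else are assumptions.

// storageclass_check.go — triage sketch: confirm whether the
// "csi-hostpath-sc" StorageClass the controller-manager keeps
// failing to find actually exists in the cluster.
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	"k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes the default kubeconfig location used by minikube.
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	_, err = cs.StorageV1().StorageClasses().Get(context.TODO(), "csi-hostpath-sc", metav1.GetOptions{})
	switch {
	case errors.IsNotFound(err):
		// Matches the pv_controller.go errors above: without the class,
		// every provisioning attempt for default/hpvc can only fail.
		fmt.Println("csi-hostpath-sc not found — provisioning cannot succeed")
	case err != nil:
		panic(err)
	default:
		fmt.Println("csi-hostpath-sc exists")
	}
}

If this prints not-found while the csi-hostpath-driver addon is supposedly enabled, the addon's manifests never applied, which would be consistent with the stabilization timeout recorded in the CSI test below.
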
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-183000 -n addons-183000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-183000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (720.82s)
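
The kubelet journal in the same dump also shows recurring iptables-canary failures ("can't initialize ip6tables table 'nat'"). Since kube-proxy detected no IPv6 iptables support and runs single-stack IPv4, this looks like benign noise rather than the cause of this failure. A sketch to reproduce kubelet's probe by hand, assuming a Linux guest reachable via "minikube ssh":

// ip6tables_canary.go — re-run kubelet's canary probe manually.
// kubelet periodically creates a throwaway KUBE-KUBELET-CANARY chain
// in the nat table; on this guest the ip6tables nat table is missing,
// so the command exits with status 3, as in the journal above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("ip6tables", "-t", "nat", "-N", "KUBE-KUBELET-CANARY").CombinedOutput()
	if err != nil {
		fmt.Printf("canary failed as in the log: %v\n%s", err, out)
		return
	}
	// Clean up if the chain was unexpectedly created.
	exec.Command("ip6tables", "-t", "nat", "-X", "KUBE-KUBELET-CANARY").Run()
	fmt.Println("ip6tables nat table is available")
}
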
TestAddons/parallel/CSI (720.85s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:535: failed waiting for csi-hostpath-driver pods to stabilize: context deadline exceeded
addons_test.go:537: csi-hostpath-driver pods stabilized in 6m0.0022005s
addons_test.go:540: (dbg) Run:  kubectl --context addons-183000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:545: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-183000 get pvc hpvc -o jsonpath={.status.phase} -n default
[... the identical kubectl poll of pvc hpvc repeats 69 more times until the wait deadline expires ...]
addons_test.go:546: failed waiting for PVC hpvc: context deadline exceeded
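For reference, the wait that just timed out is a plain phase poll against the PVC. Below is a minimal client-go sketch of such a loop, assuming an illustrative 10s poll interval and 12m deadline; the helper's actual names and timings live in helpers_test.go and may differ.

// pvcwait.go - a minimal sketch of the kind of wait loop behind
// helpers_test.go:394; function names, intervals, and the 12m deadline
// are illustrative assumptions, not minikube's actual helper code.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPVCBound polls the claim's status.phase until it is Bound or the
// context deadline expires.
func waitForPVCBound(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	tick := time.NewTicker(10 * time.Second)
	defer tick.Stop()
	for {
		pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil && pvc.Status.Phase == corev1.ClaimBound {
			return nil
		}
		select {
		case <-ctx.Done():
			// Surfaces as the "context deadline exceeded" failure seen above.
			return fmt.Errorf("failed waiting for PVC %s: %w", name, ctx.Err())
		case <-tick.C:
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 12*time.Minute)
	defer cancel()
	if err := waitForPVCBound(ctx, kubernetes.NewForConfigOrDie(cfg), "default", "hpvc"); err != nil {
		fmt.Println(err)
	}
}

Under a WithTimeout context, ctx.Err() is what produces the "context deadline exceeded" message recorded above.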
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-183000 -n addons-183000
helpers_test.go:244: <<< TestAddons/parallel/CSI FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/CSI]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-183000 logs -n 25
helpers_test.go:252: TestAddons/parallel/CSI logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-427000 | jenkins | v1.31.2 | 25 Sep 23 03:33 PDT |                     |
	|         | -p download-only-427000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-427000 | jenkins | v1.31.2 | 25 Sep 23 03:33 PDT |                     |
	|         | -p download-only-427000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.31.2 | 25 Sep 23 03:33 PDT | 25 Sep 23 03:33 PDT |
	| delete  | -p download-only-427000        | download-only-427000 | jenkins | v1.31.2 | 25 Sep 23 03:33 PDT | 25 Sep 23 03:33 PDT |
	| delete  | -p download-only-427000        | download-only-427000 | jenkins | v1.31.2 | 25 Sep 23 03:33 PDT | 25 Sep 23 03:33 PDT |
	| start   | --download-only -p             | binary-mirror-317000 | jenkins | v1.31.2 | 25 Sep 23 03:33 PDT |                     |
	|         | binary-mirror-317000           |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --binary-mirror                |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49310         |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-317000        | binary-mirror-317000 | jenkins | v1.31.2 | 25 Sep 23 03:33 PDT | 25 Sep 23 03:33 PDT |
	| start   | -p addons-183000               | addons-183000        | jenkins | v1.31.2 | 25 Sep 23 03:33 PDT | 25 Sep 23 03:40 PDT |
	|         | --wait=true --memory=4000      |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --addons=registry              |                      |         |         |                     |                     |
	|         | --addons=metrics-server        |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                      |         |         |                     |                     |
	|         | --addons=gcp-auth              |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|         | --addons=ingress               |                      |         |         |                     |                     |
	|         | --addons=ingress-dns           |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-183000        | jenkins | v1.31.2 | 25 Sep 23 03:52 PDT |                     |
	|         | addons-183000                  |                      |         |         |                     |                     |
	| addons  | enable headlamp                | addons-183000        | jenkins | v1.31.2 | 25 Sep 23 03:52 PDT | 25 Sep 23 03:52 PDT |
	|         | -p addons-183000               |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/25 03:33:43
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.21.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0925 03:33:43.113263    1555 out.go:296] Setting OutFile to fd 1 ...
	I0925 03:33:43.113390    1555 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 03:33:43.113393    1555 out.go:309] Setting ErrFile to fd 2...
	I0925 03:33:43.113395    1555 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 03:33:43.113522    1555 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17297-1010/.minikube/bin
	I0925 03:33:43.114539    1555 out.go:303] Setting JSON to false
	I0925 03:33:43.129689    1555 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":198,"bootTime":1695637825,"procs":391,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.6","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 03:33:43.129759    1555 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0925 03:33:43.134529    1555 out.go:177] * [addons-183000] minikube v1.31.2 on Darwin 13.6 (arm64)
	I0925 03:33:43.141636    1555 out.go:177]   - MINIKUBE_LOCATION=17297
	I0925 03:33:43.145595    1555 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	I0925 03:33:43.141675    1555 notify.go:220] Checking for updates...
	I0925 03:33:43.149882    1555 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 03:33:43.152528    1555 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 03:33:43.155561    1555 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	I0925 03:33:43.158461    1555 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 03:33:43.161685    1555 driver.go:373] Setting default libvirt URI to qemu:///system
	I0925 03:33:43.165518    1555 out.go:177] * Using the qemu2 driver based on user configuration
	I0925 03:33:43.170494    1555 start.go:298] selected driver: qemu2
	I0925 03:33:43.170500    1555 start.go:902] validating driver "qemu2" against <nil>
	I0925 03:33:43.170505    1555 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 03:33:43.172415    1555 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0925 03:33:43.175485    1555 out.go:177] * Automatically selected the socket_vmnet network
	I0925 03:33:43.178631    1555 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0925 03:33:43.178656    1555 cni.go:84] Creating CNI manager for ""
	I0925 03:33:43.178667    1555 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 03:33:43.178671    1555 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0925 03:33:43.178683    1555 start_flags.go:321] config:
	{Name:addons-183000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-183000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 03:33:43.182821    1555 iso.go:125] acquiring lock: {Name:mkf881a60cf9fd1672567914305ff6f7a4f13809 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 03:33:43.186491    1555 out.go:177] * Starting control plane node addons-183000 in cluster addons-183000
	I0925 03:33:43.194499    1555 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0925 03:33:43.194520    1555 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0925 03:33:43.194535    1555 cache.go:57] Caching tarball of preloaded images
	I0925 03:33:43.194599    1555 preload.go:174] Found /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0925 03:33:43.194605    1555 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0925 03:33:43.194819    1555 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/config.json ...
	I0925 03:33:43.194831    1555 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/config.json: {Name:mk49657fba0a0e3293097f9bbbd8574691cb2471 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 03:33:43.195036    1555 start.go:365] acquiring machines lock for addons-183000: {Name:mk918d99818e2cf21e5912bc291ed18d4b442ba6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 03:33:43.195158    1555 start.go:369] acquired machines lock for "addons-183000" in 116.458µs
	I0925 03:33:43.195167    1555 start.go:93] Provisioning new machine with config: &{Name:addons-183000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-183000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 03:33:43.195202    1555 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 03:33:43.203570    1555 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0925 03:33:43.526310    1555 start.go:159] libmachine.API.Create for "addons-183000" (driver="qemu2")
	I0925 03:33:43.526360    1555 client.go:168] LocalClient.Create starting
	I0925 03:33:43.526524    1555 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem
	I0925 03:33:43.685162    1555 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/cert.pem
	I0925 03:33:43.725069    1555 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17297-1010/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0925 03:33:44.270899    1555 main.go:141] libmachine: Creating SSH key...
	I0925 03:33:44.356373    1555 main.go:141] libmachine: Creating Disk image...
	I0925 03:33:44.356381    1555 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 03:33:44.356565    1555 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/disk.qcow2.raw /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/disk.qcow2
	I0925 03:33:44.389562    1555 main.go:141] libmachine: STDOUT: 
	I0925 03:33:44.389584    1555 main.go:141] libmachine: STDERR: 
	I0925 03:33:44.389658    1555 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/disk.qcow2 +20000M
	I0925 03:33:44.397120    1555 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 03:33:44.397139    1555 main.go:141] libmachine: STDERR: 
	I0925 03:33:44.397152    1555 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/disk.qcow2
	I0925 03:33:44.397157    1555 main.go:141] libmachine: Starting QEMU VM...
	I0925 03:33:44.397194    1555 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:70:b3:50:3d:bc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/disk.qcow2
	I0925 03:33:44.464471    1555 main.go:141] libmachine: STDOUT: 
	I0925 03:33:44.464499    1555 main.go:141] libmachine: STDERR: 
	I0925 03:33:44.464503    1555 main.go:141] libmachine: Attempt 0
	I0925 03:33:44.464522    1555 main.go:141] libmachine: Searching for 4e:70:b3:50:3d:bc in /var/db/dhcpd_leases ...
	I0925 03:33:46.465678    1555 main.go:141] libmachine: Attempt 1
	I0925 03:33:46.465761    1555 main.go:141] libmachine: Searching for 4e:70:b3:50:3d:bc in /var/db/dhcpd_leases ...
	I0925 03:33:48.467021    1555 main.go:141] libmachine: Attempt 2
	I0925 03:33:48.467061    1555 main.go:141] libmachine: Searching for 4e:70:b3:50:3d:bc in /var/db/dhcpd_leases ...
	I0925 03:33:50.468194    1555 main.go:141] libmachine: Attempt 3
	I0925 03:33:50.468212    1555 main.go:141] libmachine: Searching for 4e:70:b3:50:3d:bc in /var/db/dhcpd_leases ...
	I0925 03:33:52.469241    1555 main.go:141] libmachine: Attempt 4
	I0925 03:33:52.469258    1555 main.go:141] libmachine: Searching for 4e:70:b3:50:3d:bc in /var/db/dhcpd_leases ...
	I0925 03:33:54.470316    1555 main.go:141] libmachine: Attempt 5
	I0925 03:33:54.470352    1555 main.go:141] libmachine: Searching for 4e:70:b3:50:3d:bc in /var/db/dhcpd_leases ...
	I0925 03:33:56.471428    1555 main.go:141] libmachine: Attempt 6
	I0925 03:33:56.471461    1555 main.go:141] libmachine: Searching for 4e:70:b3:50:3d:bc in /var/db/dhcpd_leases ...
	I0925 03:33:56.471625    1555 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0925 03:33:56.471679    1555 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:4e:70:b3:50:3d:bc ID:1,4e:70:b3:50:3d:bc Lease:0x6512b393}
	I0925 03:33:56.471685    1555 main.go:141] libmachine: Found match: 4e:70:b3:50:3d:bc
	I0925 03:33:56.471705    1555 main.go:141] libmachine: IP: 192.168.105.2
	I0925 03:33:56.471714    1555 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
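The lease search above is a line scan of macOS's dhcpd lease database. A minimal Go sketch of the same idea follows, assuming the ip_address=/hw_address= field names that macOS lease entries use and that ip_address= precedes hw_address= within each entry.

// leasescan.go - a minimal sketch of the /var/db/dhcpd_leases scan the
// libmachine lines above describe; field names are assumptions drawn
// from the logged dhcp entry, not minikube's actual parser.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// findLeaseIP returns the IP of the lease entry whose hardware address
// contains the given MAC.
func findLeaseIP(path, mac string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()
	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address=") && strings.Contains(line, mac):
			return ip, nil
		}
	}
	if err := sc.Err(); err != nil {
		return "", err
	}
	return "", fmt.Errorf("no lease found for %s", mac)
}

func main() {
	ip, err := findLeaseIP("/var/db/dhcpd_leases", "4e:70:b3:50:3d:bc")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("IP:", ip) // 192.168.105.2 in the run above
}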
	I0925 03:33:57.476002    1555 machine.go:88] provisioning docker machine ...
	I0925 03:33:57.476029    1555 buildroot.go:166] provisioning hostname "addons-183000"
	I0925 03:33:57.476399    1555 main.go:141] libmachine: Using SSH client type: native
	I0925 03:33:57.476656    1555 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100de8760] 0x100deaed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0925 03:33:57.476663    1555 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-183000 && echo "addons-183000" | sudo tee /etc/hostname
	I0925 03:33:57.549226    1555 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-183000
	
	I0925 03:33:57.549294    1555 main.go:141] libmachine: Using SSH client type: native
	I0925 03:33:57.549565    1555 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100de8760] 0x100deaed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0925 03:33:57.549580    1555 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-183000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-183000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-183000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0925 03:33:57.619664    1555 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0925 03:33:57.619678    1555 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17297-1010/.minikube CaCertPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17297-1010/.minikube}
	I0925 03:33:57.619692    1555 buildroot.go:174] setting up certificates
	I0925 03:33:57.619698    1555 provision.go:83] configureAuth start
	I0925 03:33:57.619702    1555 provision.go:138] copyHostCerts
	I0925 03:33:57.619800    1555 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17297-1010/.minikube/key.pem (1679 bytes)
	I0925 03:33:57.620015    1555 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17297-1010/.minikube/ca.pem (1082 bytes)
	I0925 03:33:57.620106    1555 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17297-1010/.minikube/cert.pem (1123 bytes)
	I0925 03:33:57.620180    1555 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca-key.pem org=jenkins.addons-183000 san=[192.168.105.2 192.168.105.2 localhost 127.0.0.1 minikube addons-183000]
	I0925 03:33:57.680529    1555 provision.go:172] copyRemoteCerts
	I0925 03:33:57.680584    1555 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0925 03:33:57.680600    1555 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/id_rsa Username:docker}
	I0925 03:33:57.716693    1555 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0925 03:33:57.724070    1555 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0925 03:33:57.731348    1555 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0925 03:33:57.738044    1555 provision.go:86] duration metric: configureAuth took 118.340875ms
	I0925 03:33:57.738067    1555 buildroot.go:189] setting minikube options for container-runtime
	I0925 03:33:57.738181    1555 config.go:182] Loaded profile config "addons-183000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 03:33:57.738225    1555 main.go:141] libmachine: Using SSH client type: native
	I0925 03:33:57.738442    1555 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100de8760] 0x100deaed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0925 03:33:57.738446    1555 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0925 03:33:57.806528    1555 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0925 03:33:57.806536    1555 buildroot.go:70] root file system type: tmpfs
	I0925 03:33:57.806591    1555 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0925 03:33:57.806639    1555 main.go:141] libmachine: Using SSH client type: native
	I0925 03:33:57.806901    1555 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100de8760] 0x100deaed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0925 03:33:57.806939    1555 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0925 03:33:57.879305    1555 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0925 03:33:57.879349    1555 main.go:141] libmachine: Using SSH client type: native
	I0925 03:33:57.879600    1555 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100de8760] 0x100deaed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0925 03:33:57.879612    1555 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0925 03:33:58.218156    1555 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0925 03:33:58.218176    1555 machine.go:91] provisioned docker machine in 742.178459ms
	I0925 03:33:58.218184    1555 client.go:171] LocalClient.Create took 14.692090292s
	I0925 03:33:58.218196    1555 start.go:167] duration metric: libmachine.API.Create for "addons-183000" took 14.692162542s
	I0925 03:33:58.218201    1555 start.go:300] post-start starting for "addons-183000" (driver="qemu2")
	I0925 03:33:58.218213    1555 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0925 03:33:58.218288    1555 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0925 03:33:58.218298    1555 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/id_rsa Username:docker}
	I0925 03:33:58.255037    1555 ssh_runner.go:195] Run: cat /etc/os-release
	I0925 03:33:58.256454    1555 info.go:137] Remote host: Buildroot 2021.02.12
	I0925 03:33:58.256461    1555 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17297-1010/.minikube/addons for local assets ...
	I0925 03:33:58.256533    1555 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17297-1010/.minikube/files for local assets ...
	I0925 03:33:58.256562    1555 start.go:303] post-start completed in 38.354459ms
	I0925 03:33:58.256920    1555 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/config.json ...
	I0925 03:33:58.257077    1555 start.go:128] duration metric: createHost completed in 15.062148875s
	I0925 03:33:58.257104    1555 main.go:141] libmachine: Using SSH client type: native
	I0925 03:33:58.257337    1555 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100de8760] 0x100deaed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0925 03:33:58.257341    1555 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0925 03:33:58.325173    1555 main.go:141] libmachine: SSH cmd err, output: <nil>: 1695638038.462407626
	
	I0925 03:33:58.325184    1555 fix.go:206] guest clock: 1695638038.462407626
	I0925 03:33:58.325188    1555 fix.go:219] Guest: 2023-09-25 03:33:58.462407626 -0700 PDT Remote: 2023-09-25 03:33:58.257082 -0700 PDT m=+15.162425626 (delta=205.325626ms)
	I0925 03:33:58.325199    1555 fix.go:190] guest clock delta is within tolerance: 205.325626ms
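The check above compares the guest's `date +%s.%N` output against the host clock and, per the log line, treats small deltas as within tolerance. A minimal sketch of that comparison, with an assumed 2s threshold rather than minikube's configured tolerance:

// clockdelta.go - illustrative guest-clock tolerance check; the 2s
// threshold is an assumption for this sketch.
package main

import (
	"fmt"
	"time"
)

func withinTolerance(guest, host time.Time, tol time.Duration) bool {
	d := guest.Sub(host)
	if d < 0 {
		d = -d
	}
	return d <= tol
}

func main() {
	// Guest time parsed from the `date +%s.%N` output 1695638038.462407626.
	guest := time.Unix(1695638038, 462407626)
	host := time.Unix(1695638038, 257082000) // the "Remote" timestamp above
	fmt.Println(withinTolerance(guest, host, 2*time.Second), guest.Sub(host)) // true 205.325626ms
}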
	I0925 03:33:58.325201    1555 start.go:83] releasing machines lock for "addons-183000", held for 15.130317917s
	I0925 03:33:58.325486    1555 ssh_runner.go:195] Run: cat /version.json
	I0925 03:33:58.325494    1555 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/id_rsa Username:docker}
	I0925 03:33:58.325516    1555 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0925 03:33:58.325555    1555 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/id_rsa Username:docker}
	I0925 03:33:58.361340    1555 ssh_runner.go:195] Run: systemctl --version
	I0925 03:33:58.402839    1555 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0925 03:33:58.404630    1555 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0925 03:33:58.404664    1555 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0925 03:33:58.409389    1555 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0925 03:33:58.409398    1555 start.go:469] detecting cgroup driver to use...
	I0925 03:33:58.409504    1555 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0925 03:33:58.414731    1555 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0925 03:33:58.417759    1555 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0925 03:33:58.420882    1555 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0925 03:33:58.420905    1555 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0925 03:33:58.424376    1555 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0925 03:33:58.427971    1555 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0925 03:33:58.431438    1555 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0925 03:33:58.434555    1555 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0925 03:33:58.437481    1555 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0925 03:33:58.440650    1555 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0925 03:33:58.444117    1555 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0925 03:33:58.446963    1555 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 03:33:58.506828    1555 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0925 03:33:58.515326    1555 start.go:469] detecting cgroup driver to use...
	I0925 03:33:58.515396    1555 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0925 03:33:58.520350    1555 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0925 03:33:58.525290    1555 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0925 03:33:58.532641    1555 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0925 03:33:58.537661    1555 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0925 03:33:58.542291    1555 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0925 03:33:58.583433    1555 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0925 03:33:58.588627    1555 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0925 03:33:58.594011    1555 ssh_runner.go:195] Run: which cri-dockerd
	I0925 03:33:58.595317    1555 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0925 03:33:58.597772    1555 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0925 03:33:58.602614    1555 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0925 03:33:58.687592    1555 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0925 03:33:58.763371    1555 docker.go:554] configuring docker to use "cgroupfs" as cgroup driver...
	I0925 03:33:58.763431    1555 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0925 03:33:58.768807    1555 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 03:33:58.850856    1555 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0925 03:34:00.021109    1555 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.170257708s)
	I0925 03:34:00.021184    1555 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0925 03:34:00.102397    1555 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0925 03:34:00.182389    1555 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0925 03:34:00.242288    1555 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 03:34:00.310048    1555 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0925 03:34:00.320927    1555 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 03:34:00.397773    1555 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0925 03:34:00.421934    1555 start.go:516] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0925 03:34:00.422022    1555 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0925 03:34:00.424107    1555 start.go:537] Will wait 60s for crictl version
	I0925 03:34:00.424134    1555 ssh_runner.go:195] Run: which crictl
	I0925 03:34:00.425400    1555 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0925 03:34:00.448268    1555 start.go:553] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I0925 03:34:00.448328    1555 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0925 03:34:00.458640    1555 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0925 03:34:00.474285    1555 out.go:204] * Preparing Kubernetes v1.28.2 on Docker 24.0.6 ...
	I0925 03:34:00.474362    1555 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0925 03:34:00.475766    1555 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0925 03:34:00.479918    1555 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0925 03:34:00.479959    1555 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0925 03:34:00.485137    1555 docker.go:664] Got preloaded images: 
	I0925 03:34:00.485144    1555 docker.go:670] registry.k8s.io/kube-apiserver:v1.28.2 wasn't preloaded
	I0925 03:34:00.485184    1555 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0925 03:34:00.488328    1555 ssh_runner.go:195] Run: which lz4
	I0925 03:34:00.489753    1555 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0925 03:34:00.490946    1555 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0925 03:34:00.490958    1555 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (356993689 bytes)
	I0925 03:34:01.821604    1555 docker.go:628] Took 1.331913 seconds to copy over tarball
	I0925 03:34:01.821663    1555 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0925 03:34:02.850635    1555 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.028977875s)
	I0925 03:34:02.850646    1555 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0925 03:34:02.866214    1555 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0925 03:34:02.869216    1555 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0925 03:34:02.874196    1555 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 03:34:02.955148    1555 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0925 03:34:05.167252    1555 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.212127209s)
	I0925 03:34:05.167356    1555 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0925 03:34:05.173293    1555 docker.go:664] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.2
	registry.k8s.io/kube-scheduler:v1.28.2
	registry.k8s.io/kube-proxy:v1.28.2
	registry.k8s.io/kube-controller-manager:v1.28.2
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0925 03:34:05.173304    1555 cache_images.go:84] Images are preloaded, skipping loading
	I0925 03:34:05.173372    1555 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0925 03:34:05.180961    1555 cni.go:84] Creating CNI manager for ""
	I0925 03:34:05.180975    1555 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 03:34:05.180995    1555 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0925 03:34:05.181006    1555 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-183000 NodeName:addons-183000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0925 03:34:05.181071    1555 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-183000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0925 03:34:05.181111    1555 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-183000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:addons-183000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0925 03:34:05.181162    1555 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I0925 03:34:05.184441    1555 binaries.go:44] Found k8s binaries, skipping transfer
	I0925 03:34:05.184477    1555 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0925 03:34:05.187654    1555 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0925 03:34:05.192980    1555 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0925 03:34:05.197983    1555 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I0925 03:34:05.202799    1555 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0925 03:34:05.204148    1555 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0925 03:34:05.208295    1555 certs.go:56] Setting up /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000 for IP: 192.168.105.2
	I0925 03:34:05.208303    1555 certs.go:190] acquiring lock for shared ca certs: {Name:mk095b03680bcdeba6c321a9f458c9fbafa67639 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 03:34:05.208463    1555 certs.go:204] generating minikubeCA CA: /Users/jenkins/minikube-integration/17297-1010/.minikube/ca.key
	I0925 03:34:05.279404    1555 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17297-1010/.minikube/ca.crt ...
	I0925 03:34:05.279413    1555 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/ca.crt: {Name:mk70f9fc8ba800117a8a8b4d751d3a98c619cb54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 03:34:05.279591    1555 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17297-1010/.minikube/ca.key ...
	I0925 03:34:05.279595    1555 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/ca.key: {Name:mkd44aa01a2f3e5b978643c9a3feb1028c2bb791 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 03:34:05.279712    1555 certs.go:204] generating proxyClientCA CA: /Users/jenkins/minikube-integration/17297-1010/.minikube/proxy-client-ca.key
	I0925 03:34:05.342350    1555 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17297-1010/.minikube/proxy-client-ca.crt ...
	I0925 03:34:05.342356    1555 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/proxy-client-ca.crt: {Name:mkc0af119bea050a868312bfe8f89d742604990c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 03:34:05.342558    1555 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17297-1010/.minikube/proxy-client-ca.key ...
	I0925 03:34:05.342563    1555 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/proxy-client-ca.key: {Name:mka9b8c6393173e2358c8b84eb9bff6ea6851f33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 03:34:05.342694    1555 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/client.key
	I0925 03:34:05.342700    1555 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/client.crt with IP's: []
	I0925 03:34:05.380999    1555 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/client.crt ...
	I0925 03:34:05.381013    1555 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/client.crt: {Name:mkec4b98dbbfb657baac4f5fae18fe43bd8b5970 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 03:34:05.381125    1555 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/client.key ...
	I0925 03:34:05.381130    1555 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/client.key: {Name:mk8be81ea1673fa1894559e8faa2fa2323674614 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 03:34:05.381227    1555 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/apiserver.key.96055969
	I0925 03:34:05.381235    1555 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/apiserver.crt.96055969 with IP's: [192.168.105.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0925 03:34:05.441721    1555 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/apiserver.crt.96055969 ...
	I0925 03:34:05.441725    1555 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/apiserver.crt.96055969: {Name:mkba38dc1a56241112b86d1503bca4f2588c1bf7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 03:34:05.441849    1555 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/apiserver.key.96055969 ...
	I0925 03:34:05.441852    1555 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/apiserver.key.96055969: {Name:mk41423e9550dcb3371da4467db52078d1bb4d78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 03:34:05.441956    1555 certs.go:337] copying /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/apiserver.crt.96055969 -> /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/apiserver.crt
	I0925 03:34:05.442053    1555 certs.go:341] copying /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/apiserver.key.96055969 -> /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/apiserver.key
	I0925 03:34:05.442146    1555 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/proxy-client.key
	I0925 03:34:05.442154    1555 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/proxy-client.crt with IP's: []
	I0925 03:34:05.578079    1555 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/proxy-client.crt ...
	I0925 03:34:05.578082    1555 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/proxy-client.crt: {Name:mkbd132fd7a0f2cb28d572f95bd43c9a1ef215f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 03:34:05.578216    1555 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/proxy-client.key ...
	I0925 03:34:05.578218    1555 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/proxy-client.key: {Name:mkf93f480df65e887c0e782806fe1d821d05370d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 03:34:05.578436    1555 certs.go:437] found cert: /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca-key.pem (1675 bytes)
	I0925 03:34:05.578458    1555 certs.go:437] found cert: /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem (1082 bytes)
	I0925 03:34:05.578479    1555 certs.go:437] found cert: /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/cert.pem (1123 bytes)
	I0925 03:34:05.578499    1555 certs.go:437] found cert: /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/key.pem (1679 bytes)
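
The apiserver certificate generated above can be checked for the expected SANs (the IPs from the crypto.go line: the node IP, the first address of the ServiceCIDR, and loopback). A sketch using openssl:

    openssl x509 -noout -text \
      -in /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/apiserver.crt \
      | grep -A1 'Subject Alternative Name'
    # Expect at least the IPs from the log:
    # 192.168.105.2, 10.96.0.1, 127.0.0.1, 10.0.0.1
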
	I0925 03:34:05.578876    1555 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0925 03:34:05.587435    1555 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0925 03:34:05.594545    1555 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0925 03:34:05.601433    1555 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0925 03:34:05.608504    1555 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0925 03:34:05.616247    1555 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0925 03:34:05.623555    1555 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0925 03:34:05.630877    1555 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0925 03:34:05.637827    1555 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0925 03:34:05.644421    1555 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0925 03:34:05.650432    1555 ssh_runner.go:195] Run: openssl version
	I0925 03:34:05.652383    1555 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0925 03:34:05.655860    1555 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0925 03:34:05.657450    1555 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 25 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I0925 03:34:05.657472    1555 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0925 03:34:05.659354    1555 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
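
The hash in the symlink name is OpenSSL's subject hash for the CA, which is how the system trust store indexes certificates under /etc/ssl/certs. Reproducing it by hand:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # -> b5213941, i.e. the /etc/ssl/certs/b5213941.0 link created above
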
	I0925 03:34:05.662355    1555 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0925 03:34:05.663775    1555 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0925 03:34:05.663811    1555 kubeadm.go:404] StartCluster: {Name:addons-183000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-183000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
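
The StartCluster config echoed above is also persisted on the host, so it can be inspected or diffed between runs. Assuming the standard minikube profile layout (the config.json path is the usual location, not shown in this log):

    cat /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/config.json | head
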
	I0925 03:34:05.663875    1555 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0925 03:34:05.669363    1555 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0925 03:34:05.672641    1555 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0925 03:34:05.675788    1555 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0925 03:34:05.678955    1555 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0925 03:34:05.678977    1555 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0925 03:34:05.700129    1555 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I0925 03:34:05.700165    1555 kubeadm.go:322] [preflight] Running pre-flight checks
	I0925 03:34:05.762507    1555 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0925 03:34:05.762580    1555 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0925 03:34:05.762631    1555 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0925 03:34:05.856523    1555 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0925 03:34:05.862696    1555 out.go:204]   - Generating certificates and keys ...
	I0925 03:34:05.862744    1555 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0925 03:34:05.862781    1555 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0925 03:34:05.954799    1555 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0925 03:34:06.088347    1555 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0925 03:34:06.179074    1555 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0925 03:34:06.367263    1555 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0925 03:34:06.441263    1555 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0925 03:34:06.441326    1555 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-183000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0925 03:34:06.679555    1555 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0925 03:34:06.679622    1555 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-183000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0925 03:34:06.780717    1555 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0925 03:34:06.934557    1555 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0925 03:34:07.004571    1555 kubeadm.go:322] [certs] Generating "sa" key and public key
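
At this point the etcd certificates that were missing in the first-start check earlier now exist under the certificateDir from the kubeadm output. A quick re-check (the filenames are the standard kubeadm set):

    sudo ls /var/lib/minikube/certs/etcd
    # -> ca.crt  ca.key  healthcheck-client.crt  healthcheck-client.key
    #    peer.crt  peer.key  server.crt  server.key
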
	I0925 03:34:07.004599    1555 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0925 03:34:07.096444    1555 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0925 03:34:07.197087    1555 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0925 03:34:07.295019    1555 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0925 03:34:07.459088    1555 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0925 03:34:07.459841    1555 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0925 03:34:07.461016    1555 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0925 03:34:07.464311    1555 out.go:204]   - Booting up control plane ...
	I0925 03:34:07.464429    1555 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0925 03:34:07.464523    1555 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0925 03:34:07.464562    1555 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0925 03:34:07.468573    1555 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0925 03:34:07.468914    1555 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0925 03:34:07.468980    1555 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0925 03:34:07.551081    1555 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0925 03:34:11.552205    1555 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.001307 seconds
	I0925 03:34:11.552277    1555 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0925 03:34:11.558090    1555 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0925 03:34:12.066492    1555 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0925 03:34:12.066604    1555 kubeadm.go:322] [mark-control-plane] Marking the node addons-183000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0925 03:34:12.571455    1555 kubeadm.go:322] [bootstrap-token] Using token: dcud0i.8u8422zl7jahtpxe
	I0925 03:34:12.577836    1555 out.go:204]   - Configuring RBAC rules ...
	I0925 03:34:12.577916    1555 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0925 03:34:12.580042    1555 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0925 03:34:12.583046    1555 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0925 03:34:12.584193    1555 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0925 03:34:12.585457    1555 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0925 03:34:12.586636    1555 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0925 03:34:12.592832    1555 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0925 03:34:12.757427    1555 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0925 03:34:12.982058    1555 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0925 03:34:12.982629    1555 kubeadm.go:322] 
	I0925 03:34:12.982664    1555 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0925 03:34:12.982667    1555 kubeadm.go:322] 
	I0925 03:34:12.982715    1555 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0925 03:34:12.982721    1555 kubeadm.go:322] 
	I0925 03:34:12.982735    1555 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0925 03:34:12.982762    1555 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0925 03:34:12.982824    1555 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0925 03:34:12.982828    1555 kubeadm.go:322] 
	I0925 03:34:12.982852    1555 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0925 03:34:12.982856    1555 kubeadm.go:322] 
	I0925 03:34:12.982895    1555 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0925 03:34:12.982898    1555 kubeadm.go:322] 
	I0925 03:34:12.982927    1555 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0925 03:34:12.982998    1555 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0925 03:34:12.983041    1555 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0925 03:34:12.983046    1555 kubeadm.go:322] 
	I0925 03:34:12.983087    1555 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0925 03:34:12.983123    1555 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0925 03:34:12.983125    1555 kubeadm.go:322] 
	I0925 03:34:12.983172    1555 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token dcud0i.8u8422zl7jahtpxe \
	I0925 03:34:12.983225    1555 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:3fc5fb926713648f8638ba10da0d4f45584d32929bcc07af5ada491c000ad47e \
	I0925 03:34:12.983240    1555 kubeadm.go:322] 	--control-plane 
	I0925 03:34:12.983242    1555 kubeadm.go:322] 
	I0925 03:34:12.983281    1555 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0925 03:34:12.983285    1555 kubeadm.go:322] 
	I0925 03:34:12.983328    1555 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token dcud0i.8u8422zl7jahtpxe \
	I0925 03:34:12.983387    1555 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:3fc5fb926713648f8638ba10da0d4f45584d32929bcc07af5ada491c000ad47e 
	I0925 03:34:12.983463    1555 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
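
The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's public key, which lets a joining node pin the CA it discovers. It can be recomputed from the CA cert with the standard recipe (assuming an RSA CA key, which is what kubeadm generates by default):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
    # -> 3fc5fb926713648f8638ba10da0d4f45584d32929bcc07af5ada491c000ad47e
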
	I0925 03:34:12.983472    1555 cni.go:84] Creating CNI manager for ""
	I0925 03:34:12.983479    1555 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 03:34:12.992098    1555 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0925 03:34:12.995235    1555 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0925 03:34:12.999700    1555 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
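
The 457-byte conflist scp'd above configures the bridge plugin that the cni.go line recommended for the qemu2 driver with the docker runtime. Its exact contents aren't in the log; a minimal conflist of the same shape looks like this (values illustrative):

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
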
	I0925 03:34:13.004656    1555 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0925 03:34:13.004755    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=1bf6c3d5317028f348e55ea19d261973a6487d3c minikube.k8s.io/name=addons-183000 minikube.k8s.io/updated_at=2023_09_25T03_34_13_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:13.004757    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:13.008164    1555 ops.go:34] apiserver oom_adj: -16
	I0925 03:34:13.063625    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:13.095139    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:13.629666    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:14.129649    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:14.629662    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:15.129655    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:15.629628    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:16.129723    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:16.629660    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:17.129683    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:17.629643    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:18.129619    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:18.629638    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:19.129594    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:19.629589    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:20.129625    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:20.629540    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:21.129598    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:21.629573    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:22.129550    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:22.629493    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:23.129517    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:23.629511    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:24.129464    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:24.629448    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:25.129565    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:25.629529    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:26.129496    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:26.629436    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:26.667079    1555 kubeadm.go:1081] duration metric: took 13.662618083s to wait for elevateKubeSystemPrivileges.
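
The burst of repeated `kubectl get sa default` calls above is a poll loop: minikube retries on a short interval until the default ServiceAccount exists, which signals that the controller-manager's serviceaccount controller is live. The equivalent wait, done by hand:

    until sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5   # the log timestamps show ~500ms between attempts
    done
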
	I0925 03:34:26.667097    1555 kubeadm.go:406] StartCluster complete in 21.003673917s
	I0925 03:34:26.667106    1555 settings.go:142] acquiring lock: {Name:mkb5a0822179f07ef9369c44aa9b64eb9ef74eed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 03:34:26.667266    1555 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17297-1010/kubeconfig
	I0925 03:34:26.667431    1555 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/kubeconfig: {Name:mkaa9d09ca2bf27c1a43efc9acf938adcc68343d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 03:34:26.667677    1555 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0925 03:34:26.667722    1555 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0925 03:34:26.667779    1555 addons.go:69] Setting volumesnapshots=true in profile "addons-183000"
	I0925 03:34:26.667782    1555 addons.go:69] Setting cloud-spanner=true in profile "addons-183000"
	I0925 03:34:26.667785    1555 addons.go:231] Setting addon volumesnapshots=true in "addons-183000"
	I0925 03:34:26.667789    1555 addons.go:231] Setting addon cloud-spanner=true in "addons-183000"
	I0925 03:34:26.667790    1555 addons.go:69] Setting default-storageclass=true in profile "addons-183000"
	I0925 03:34:26.667799    1555 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-183000"
	I0925 03:34:26.667820    1555 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-183000"
	I0925 03:34:26.667848    1555 host.go:66] Checking if "addons-183000" exists ...
	I0925 03:34:26.667850    1555 addons.go:69] Setting registry=true in profile "addons-183000"
	I0925 03:34:26.667858    1555 addons.go:231] Setting addon registry=true in "addons-183000"
	I0925 03:34:26.667849    1555 addons.go:69] Setting metrics-server=true in profile "addons-183000"
	I0925 03:34:26.667873    1555 addons.go:231] Setting addon metrics-server=true in "addons-183000"
	I0925 03:34:26.667880    1555 host.go:66] Checking if "addons-183000" exists ...
	I0925 03:34:26.667881    1555 addons.go:69] Setting gcp-auth=true in profile "addons-183000"
	I0925 03:34:26.667902    1555 mustload.go:65] Loading cluster: addons-183000
	I0925 03:34:26.667915    1555 host.go:66] Checking if "addons-183000" exists ...
	I0925 03:34:26.667948    1555 host.go:66] Checking if "addons-183000" exists ...
	I0925 03:34:26.667977    1555 config.go:182] Loaded profile config "addons-183000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 03:34:26.668033    1555 addons.go:69] Setting ingress-dns=true in profile "addons-183000"
	I0925 03:34:26.668035    1555 addons.go:69] Setting inspektor-gadget=true in profile "addons-183000"
	I0925 03:34:26.668042    1555 addons.go:69] Setting storage-provisioner=true in profile "addons-183000"
	I0925 03:34:26.668047    1555 addons.go:231] Setting addon storage-provisioner=true in "addons-183000"
	I0925 03:34:26.668049    1555 addons.go:231] Setting addon inspektor-gadget=true in "addons-183000"
	I0925 03:34:26.668059    1555 host.go:66] Checking if "addons-183000" exists ...
	I0925 03:34:26.668076    1555 host.go:66] Checking if "addons-183000" exists ...
	W0925 03:34:26.668189    1555 host.go:54] host status for "addons-183000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/monitor: connect: connection refused
	W0925 03:34:26.668197    1555 addons.go:277] "addons-183000" is not running, setting volumesnapshots=true and skipping enablement (err=<nil>)
	I0925 03:34:26.667780    1555 addons.go:69] Setting ingress=true in profile "addons-183000"
	I0925 03:34:26.668202    1555 addons.go:231] Setting addon ingress=true in "addons-183000"
	I0925 03:34:26.668215    1555 host.go:66] Checking if "addons-183000" exists ...
	W0925 03:34:26.668271    1555 host.go:54] host status for "addons-183000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/monitor: connect: connection refused
	W0925 03:34:26.668277    1555 addons.go:277] "addons-183000" is not running, setting storage-provisioner=true and skipping enablement (err=<nil>)
	I0925 03:34:26.668038    1555 addons.go:231] Setting addon ingress-dns=true in "addons-183000"
	I0925 03:34:26.668289    1555 host.go:66] Checking if "addons-183000" exists ...
	I0925 03:34:26.667873    1555 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-183000"
	I0925 03:34:26.668351    1555 host.go:66] Checking if "addons-183000" exists ...
	W0925 03:34:26.668420    1555 host.go:54] host status for "addons-183000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/monitor: connect: connection refused
	W0925 03:34:26.668426    1555 addons.go:277] "addons-183000" is not running, setting ingress=true and skipping enablement (err=<nil>)
	I0925 03:34:26.668428    1555 addons.go:467] Verifying addon ingress=true in "addons-183000"
	I0925 03:34:26.671815    1555 out.go:177] * Verifying ingress addon...
	I0925 03:34:26.668077    1555 config.go:182] Loaded profile config "addons-183000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	W0925 03:34:26.668443    1555 host.go:54] host status for "addons-183000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/monitor: connect: connection refused
	W0925 03:34:26.668492    1555 host.go:54] host status for "addons-183000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/monitor: connect: connection refused
	W0925 03:34:26.668560    1555 host.go:54] host status for "addons-183000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/monitor: connect: connection refused
	W0925 03:34:26.668562    1555 host.go:54] host status for "addons-183000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/monitor: connect: connection refused
	W0925 03:34:26.668565    1555 host.go:54] host status for "addons-183000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/monitor: connect: connection refused
	I0925 03:34:26.674660    1555 addons.go:231] Setting addon default-storageclass=true in "addons-183000"
	W0925 03:34:26.679882    1555 addons.go:277] "addons-183000" is not running, setting ingress-dns=true and skipping enablement (err=<nil>)
	W0925 03:34:26.679903    1555 addons.go:277] "addons-183000" is not running, setting csi-hostpath-driver=true and skipping enablement (err=<nil>)
	W0925 03:34:26.679909    1555 addons.go:277] "addons-183000" is not running, setting cloud-spanner=true and skipping enablement (err=<nil>)
	W0925 03:34:26.679910    1555 addons.go:277] "addons-183000" is not running, setting metrics-server=true and skipping enablement (err=<nil>)
	W0925 03:34:26.679914    1555 addons.go:277] "addons-183000" is not running, setting registry=true and skipping enablement (err=<nil>)
	I0925 03:34:26.680408    1555 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0925 03:34:26.680587    1555 host.go:66] Checking if "addons-183000" exists ...
	I0925 03:34:26.685884    1555 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-183000"
	I0925 03:34:26.691851    1555 out.go:177] * Verifying csi-hostpath-driver addon...
	I0925 03:34:26.685950    1555 addons.go:467] Verifying addon metrics-server=true in "addons-183000"
	I0925 03:34:26.685956    1555 addons.go:467] Verifying addon registry=true in "addons-183000"
	I0925 03:34:26.685976    1555 host.go:66] Checking if "addons-183000" exists ...
	I0925 03:34:26.685980    1555 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.20.0
	I0925 03:34:26.693878    1555 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0925 03:34:26.696802    1555 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-183000" context rescaled to 1 replicas
	I0925 03:34:26.698859    1555 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 03:34:26.700109    1555 out.go:177] * Verifying Kubernetes components...
	I0925 03:34:26.699453    1555 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0925 03:34:26.699742    1555 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0925 03:34:26.709918    1555 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0925 03:34:26.713912    1555 out.go:177] * Verifying registry addon...
	I0925 03:34:26.717867    1555 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0925 03:34:26.720802    1555 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/id_rsa Username:docker}
	I0925 03:34:26.717891    1555 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0925 03:34:26.720819    1555 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0925 03:34:26.720825    1555 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/id_rsa Username:docker}
	I0925 03:34:26.721266    1555 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0925 03:34:26.726699    1555 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=registry
	I0925 03:34:26.728776    1555 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0925 03:34:26.751434    1555 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
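
The sed pipeline above rewrites the CoreDNS Corefile in place: it inserts a static hosts block ahead of the forward directive and a log directive ahead of errors, then replaces the ConfigMap through kubectl. Verifying the result (a sketch; the fragment shown in the comment is the inserted block):

    sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
    # The patched Corefile now contains, ahead of "forward . /etc/resolv.conf":
    #     hosts {
    #        192.168.105.1 host.minikube.internal
    #        fallthrough
    #     }
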
	I0925 03:34:26.751798    1555 node_ready.go:35] waiting up to 6m0s for node "addons-183000" to be "Ready" ...
	I0925 03:34:26.753298    1555 node_ready.go:49] node "addons-183000" has status "Ready":"True"
	I0925 03:34:26.753320    1555 node_ready.go:38] duration metric: took 1.500542ms waiting for node "addons-183000" to be "Ready" ...
	I0925 03:34:26.753326    1555 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0925 03:34:26.756603    1555 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-nj9v5" in "kube-system" namespace to be "Ready" ...
	I0925 03:34:26.894346    1555 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0925 03:34:26.894357    1555 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0925 03:34:26.894362    1555 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0925 03:34:26.913613    1555 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I0925 03:34:26.913623    1555 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0925 03:34:26.955544    1555 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0925 03:34:26.955558    1555 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0925 03:34:26.966254    1555 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0925 03:34:26.966263    1555 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0925 03:34:26.970978    1555 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0925 03:34:26.970984    1555 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0925 03:34:26.980045    1555 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I0925 03:34:26.980056    1555 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0925 03:34:27.011877    1555 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0925 03:34:27.011886    1555 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I0925 03:34:27.035496    1555 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0925 03:34:27.284243    1555 start.go:923] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I0925 03:34:28.770683    1555 pod_ready.go:102] pod "coredns-5dd5756b68-nj9v5" in "kube-system" namespace has status "Ready":"False"
	I0925 03:34:30.771066    1555 pod_ready.go:102] pod "coredns-5dd5756b68-nj9v5" in "kube-system" namespace has status "Ready":"False"
	I0925 03:34:33.271406    1555 pod_ready.go:102] pod "coredns-5dd5756b68-nj9v5" in "kube-system" namespace has status "Ready":"False"
	I0925 03:34:33.290034    1555 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0925 03:34:33.290047    1555 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/id_rsa Username:docker}
	I0925 03:34:33.333376    1555 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0925 03:34:33.340520    1555 addons.go:231] Setting addon gcp-auth=true in "addons-183000"
	I0925 03:34:33.340540    1555 host.go:66] Checking if "addons-183000" exists ...
	I0925 03:34:33.341291    1555 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0925 03:34:33.341299    1555 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/id_rsa Username:docker}
	I0925 03:34:33.385047    1555 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0925 03:34:33.390017    1555 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0925 03:34:33.393078    1555 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0925 03:34:33.393083    1555 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0925 03:34:33.401443    1555 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0925 03:34:33.401449    1555 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0925 03:34:33.408814    1555 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0925 03:34:33.408821    1555 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0925 03:34:33.415868    1555 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0925 03:34:33.956480    1555 addons.go:467] Verifying addon gcp-auth=true in "addons-183000"
	I0925 03:34:33.962940    1555 out.go:177] * Verifying gcp-auth addon...
	I0925 03:34:33.970267    1555 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0925 03:34:33.972814    1555 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0925 03:34:33.972821    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:33.975859    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:34.479146    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:34.978976    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:35.477962    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:35.770777    1555 pod_ready.go:102] pod "coredns-5dd5756b68-nj9v5" in "kube-system" namespace has status "Ready":"False"
	I0925 03:34:35.978841    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:36.478564    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:36.978738    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:37.478896    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:37.978838    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:38.273778    1555 pod_ready.go:102] pod "coredns-5dd5756b68-nj9v5" in "kube-system" namespace has status "Ready":"False"
	I0925 03:34:38.478811    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:38.770881    1555 pod_ready.go:92] pod "coredns-5dd5756b68-nj9v5" in "kube-system" namespace has status "Ready":"True"
	I0925 03:34:38.770889    1555 pod_ready.go:81] duration metric: took 12.014493833s waiting for pod "coredns-5dd5756b68-nj9v5" in "kube-system" namespace to be "Ready" ...
	I0925 03:34:38.770893    1555 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-183000" in "kube-system" namespace to be "Ready" ...
	I0925 03:34:38.773593    1555 pod_ready.go:92] pod "etcd-addons-183000" in "kube-system" namespace has status "Ready":"True"
	I0925 03:34:38.773599    1555 pod_ready.go:81] duration metric: took 2.702459ms waiting for pod "etcd-addons-183000" in "kube-system" namespace to be "Ready" ...
	I0925 03:34:38.773602    1555 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-183000" in "kube-system" namespace to be "Ready" ...
	I0925 03:34:38.775799    1555 pod_ready.go:92] pod "kube-apiserver-addons-183000" in "kube-system" namespace has status "Ready":"True"
	I0925 03:34:38.775804    1555 pod_ready.go:81] duration metric: took 2.198875ms waiting for pod "kube-apiserver-addons-183000" in "kube-system" namespace to be "Ready" ...
	I0925 03:34:38.775808    1555 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-183000" in "kube-system" namespace to be "Ready" ...
	I0925 03:34:38.777922    1555 pod_ready.go:92] pod "kube-controller-manager-addons-183000" in "kube-system" namespace has status "Ready":"True"
	I0925 03:34:38.777929    1555 pod_ready.go:81] duration metric: took 2.118625ms waiting for pod "kube-controller-manager-addons-183000" in "kube-system" namespace to be "Ready" ...
	I0925 03:34:38.777933    1555 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7t7bh" in "kube-system" namespace to be "Ready" ...
	I0925 03:34:38.780129    1555 pod_ready.go:92] pod "kube-proxy-7t7bh" in "kube-system" namespace has status "Ready":"True"
	I0925 03:34:38.780136    1555 pod_ready.go:81] duration metric: took 2.199875ms waiting for pod "kube-proxy-7t7bh" in "kube-system" namespace to be "Ready" ...
	I0925 03:34:38.780139    1555 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-183000" in "kube-system" namespace to be "Ready" ...
	I0925 03:34:38.977389    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:39.173086    1555 pod_ready.go:92] pod "kube-scheduler-addons-183000" in "kube-system" namespace has status "Ready":"True"
	I0925 03:34:39.173096    1555 pod_ready.go:81] duration metric: took 392.960166ms waiting for pod "kube-scheduler-addons-183000" in "kube-system" namespace to be "Ready" ...
	I0925 03:34:39.173100    1555 pod_ready.go:38] duration metric: took 12.419997458s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0925 03:34:39.173111    1555 api_server.go:52] waiting for apiserver process to appear ...
	I0925 03:34:39.173181    1555 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 03:34:39.178068    1555 api_server.go:72] duration metric: took 12.479424625s to wait for apiserver process to appear ...
	I0925 03:34:39.178075    1555 api_server.go:88] waiting for apiserver healthz status ...
	I0925 03:34:39.178081    1555 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I0925 03:34:39.182471    1555 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
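
The healthz probe above hits the apiserver's unauthenticated health endpoint over TLS. The same check by hand (-k because minikube's CA isn't in the local trust store):

    curl -sk https://192.168.105.2:8443/healthz
    # -> ok
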
	I0925 03:34:39.183204    1555 api_server.go:141] control plane version: v1.28.2
	I0925 03:34:39.183210    1555 api_server.go:131] duration metric: took 5.132042ms to wait for apiserver health ...
	I0925 03:34:39.183213    1555 system_pods.go:43] waiting for kube-system pods to appear ...
	I0925 03:34:39.372354    1555 system_pods.go:59] 6 kube-system pods found
	I0925 03:34:39.372365    1555 system_pods.go:61] "coredns-5dd5756b68-nj9v5" [b1bb0e62-0339-479f-9572-1e07ab015a1d] Running
	I0925 03:34:39.372368    1555 system_pods.go:61] "etcd-addons-183000" [98901ac9-8165-4fad-b6a6-6c757da8e783] Running
	I0925 03:34:39.372371    1555 system_pods.go:61] "kube-apiserver-addons-183000" [b3899bc1-2055-47fb-aded-8cc3e5ca8b22] Running
	I0925 03:34:39.372373    1555 system_pods.go:61] "kube-controller-manager-addons-183000" [12803b97-0e90-4869-a114-2dce351af701] Running
	I0925 03:34:39.372376    1555 system_pods.go:61] "kube-proxy-7t7bh" [b51c70db-a512-4aae-af91-8b45e6ce9f89] Running
	I0925 03:34:39.372378    1555 system_pods.go:61] "kube-scheduler-addons-183000" [543428f6-b6ce-448c-9d3e-48c775396c75] Running
	I0925 03:34:39.372382    1555 system_pods.go:74] duration metric: took 189.166917ms to wait for pod list to return data ...
	I0925 03:34:39.372386    1555 default_sa.go:34] waiting for default service account to be created ...
	I0925 03:34:39.478483    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:39.569942    1555 default_sa.go:45] found service account: "default"
	I0925 03:34:39.569952    1555 default_sa.go:55] duration metric: took 197.566292ms for default service account to be created ...
	I0925 03:34:39.569955    1555 system_pods.go:116] waiting for k8s-apps to be running ...
	I0925 03:34:39.771555    1555 system_pods.go:86] 6 kube-system pods found
	I0925 03:34:39.771566    1555 system_pods.go:89] "coredns-5dd5756b68-nj9v5" [b1bb0e62-0339-479f-9572-1e07ab015a1d] Running
	I0925 03:34:39.771569    1555 system_pods.go:89] "etcd-addons-183000" [98901ac9-8165-4fad-b6a6-6c757da8e783] Running
	I0925 03:34:39.771571    1555 system_pods.go:89] "kube-apiserver-addons-183000" [b3899bc1-2055-47fb-aded-8cc3e5ca8b22] Running
	I0925 03:34:39.771573    1555 system_pods.go:89] "kube-controller-manager-addons-183000" [12803b97-0e90-4869-a114-2dce351af701] Running
	I0925 03:34:39.771576    1555 system_pods.go:89] "kube-proxy-7t7bh" [b51c70db-a512-4aae-af91-8b45e6ce9f89] Running
	I0925 03:34:39.771579    1555 system_pods.go:89] "kube-scheduler-addons-183000" [543428f6-b6ce-448c-9d3e-48c775396c75] Running
	I0925 03:34:39.771582    1555 system_pods.go:126] duration metric: took 201.627792ms to wait for k8s-apps to be running ...
	I0925 03:34:39.771585    1555 system_svc.go:44] waiting for kubelet service to be running ....
	I0925 03:34:39.771649    1555 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0925 03:34:39.777059    1555 system_svc.go:56] duration metric: took 5.471834ms WaitForService to wait for kubelet.
	I0925 03:34:39.777072    1555 kubeadm.go:581] duration metric: took 13.078440792s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0925 03:34:39.777081    1555 node_conditions.go:102] verifying NodePressure condition ...
	I0925 03:34:39.970496    1555 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0925 03:34:39.970507    1555 node_conditions.go:123] node cpu capacity is 2
	I0925 03:34:39.970512    1555 node_conditions.go:105] duration metric: took 193.43225ms to run NodePressure ...
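
The NodePressure verification reads the node's reported capacity; the same numbers can be pulled straight from the node object:

    sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      get node addons-183000 -o jsonpath='{.status.capacity}'
    # -> includes "cpu":"2" and "ephemeral-storage":"17784760Ki", matching the log
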
	I0925 03:34:39.970518    1555 start.go:228] waiting for startup goroutines ...
	I0925 03:34:39.977869    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:40.478718    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:40.978494    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:41.478330    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:41.978723    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:42.478484    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:42.978499    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:43.478310    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:43.978560    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:44.478626    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:44.978747    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:45.478652    1555 kapi.go:107] duration metric: took 11.508592542s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0925 03:34:45.482917    1555 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-183000 cluster.
	I0925 03:34:45.486908    1555 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0925 03:34:45.489839    1555 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
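
Per the message above, opting a single pod out of credential injection is done with a pod label. A sketch (the pod name and image are illustrative; the label key comes from the gcp-auth message):

    kubectl run probe --image=busybox \
      --labels=gcp-auth-skip-secret=true \
      -- sleep 3600
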
	I0925 03:40:26.681420    1555 kapi.go:107] duration metric: took 6m0.007630792s to wait for app.kubernetes.io/name=ingress-nginx ...
	W0925 03:40:26.681519    1555 out.go:239] ! Enabling 'ingress' returned an error: running callbacks: [waiting for app.kubernetes.io/name=ingress-nginx pods: context deadline exceeded]
	I0925 03:40:26.713271    1555 kapi.go:107] duration metric: took 6m0.020443166s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	W0925 03:40:26.713301    1555 out.go:239] ! Enabling 'csi-hostpath-driver' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=csi-hostpath-driver pods: context deadline exceeded]
	I0925 03:40:26.715027    1555 kapi.go:107] duration metric: took 6m0.000386167s to wait for kubernetes.io/minikube-addons=registry ...
	W0925 03:40:26.715058    1555 out.go:239] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
	I0925 03:40:26.720408    1555 out.go:177] * Enabled addons: volumesnapshots, storage-provisioner, cloud-spanner, ingress-dns, metrics-server, default-storageclass, inspektor-gadget, gcp-auth
	I0925 03:40:26.729284    1555 addons.go:502] enable addons completed in 6m0.068199458s: enabled=[volumesnapshots storage-provisioner cloud-spanner ingress-dns metrics-server default-storageclass inspektor-gadget gcp-auth]
	I0925 03:40:26.729295    1555 start.go:233] waiting for cluster config update ...
	I0925 03:40:26.729300    1555 start.go:242] writing updated cluster config ...
	I0925 03:40:26.729761    1555 ssh_runner.go:195] Run: rm -f paused
	I0925 03:40:26.760421    1555 start.go:600] kubectl: 1.27.2, cluster: 1.28.2 (minor skew: 1)
	I0925 03:40:26.764251    1555 out.go:177] * Done! kubectl is now configured to use "addons-183000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Mon 2023-09-25 10:33:55 UTC, ends at Mon 2023-09-25 11:04:40 UTC. --
	Sep 25 10:34:42 addons-183000 dockerd[1111]: time="2023-09-25T10:34:42.358246450Z" level=info msg="shim disconnected" id=d009b921a4cc83c6746a6427d33a20b5315cc03832a52dae5f1cc5bda62fc19b namespace=moby
	Sep 25 10:34:42 addons-183000 dockerd[1111]: time="2023-09-25T10:34:42.358270381Z" level=warning msg="cleaning up after shim disconnected" id=d009b921a4cc83c6746a6427d33a20b5315cc03832a52dae5f1cc5bda62fc19b namespace=moby
	Sep 25 10:34:42 addons-183000 dockerd[1111]: time="2023-09-25T10:34:42.358274585Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 25 10:34:42 addons-183000 dockerd[1111]: time="2023-09-25T10:34:42.372096465Z" level=warning msg="cleanup warnings time=\"2023-09-25T10:34:42Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Sep 25 10:34:42 addons-183000 dockerd[1105]: time="2023-09-25T10:34:42.400036385Z" level=warning msg="reference for unknown type: " digest="sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf" remote="gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf"
	Sep 25 10:34:42 addons-183000 dockerd[1105]: time="2023-09-25T10:34:42.404130643Z" level=info msg="ignoring event" container=d09446869187232df599a79609b2cc6878507cb6c4070aec9a79632485a47117 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 25 10:34:42 addons-183000 dockerd[1111]: time="2023-09-25T10:34:42.404287046Z" level=info msg="shim disconnected" id=d09446869187232df599a79609b2cc6878507cb6c4070aec9a79632485a47117 namespace=moby
	Sep 25 10:34:42 addons-183000 dockerd[1111]: time="2023-09-25T10:34:42.404320674Z" level=warning msg="cleaning up after shim disconnected" id=d09446869187232df599a79609b2cc6878507cb6c4070aec9a79632485a47117 namespace=moby
	Sep 25 10:34:42 addons-183000 dockerd[1111]: time="2023-09-25T10:34:42.404325086Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 25 10:34:44 addons-183000 cri-dockerd[998]: time="2023-09-25T10:34:44Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf: Status: Downloaded newer image for gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf"
	Sep 25 10:34:44 addons-183000 dockerd[1111]: time="2023-09-25T10:34:44.321936694Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 25 10:34:44 addons-183000 dockerd[1111]: time="2023-09-25T10:34:44.321971829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 10:34:44 addons-183000 dockerd[1111]: time="2023-09-25T10:34:44.321982528Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 25 10:34:44 addons-183000 dockerd[1111]: time="2023-09-25T10:34:44.321989314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 10:52:28 addons-183000 dockerd[1111]: time="2023-09-25T10:52:28.470595101Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 25 10:52:28 addons-183000 dockerd[1111]: time="2023-09-25T10:52:28.470647350Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 10:52:28 addons-183000 dockerd[1111]: time="2023-09-25T10:52:28.470663058Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 25 10:52:28 addons-183000 dockerd[1111]: time="2023-09-25T10:52:28.470673850Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 10:52:28 addons-183000 cri-dockerd[998]: time="2023-09-25T10:52:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/df99a16ef61333f49304447de1f31c9677e9243b43dae14dfba57e8a2aeeb1be/resolv.conf as [nameserver 10.96.0.10 search headlamp.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 25 10:52:28 addons-183000 dockerd[1105]: time="2023-09-25T10:52:28.813334559Z" level=warning msg="reference for unknown type: " digest="sha256:bb15916c96306cd14f1c9c09c639d01d1d1fb854fd770bf99f3e7a9deb584753" remote="ghcr.io/headlamp-k8s/headlamp@sha256:bb15916c96306cd14f1c9c09c639d01d1d1fb854fd770bf99f3e7a9deb584753"
	Sep 25 10:52:33 addons-183000 cri-dockerd[998]: time="2023-09-25T10:52:33Z" level=info msg="Stop pulling image ghcr.io/headlamp-k8s/headlamp:v0.19.1@sha256:bb15916c96306cd14f1c9c09c639d01d1d1fb854fd770bf99f3e7a9deb584753: Status: Downloaded newer image for ghcr.io/headlamp-k8s/headlamp@sha256:bb15916c96306cd14f1c9c09c639d01d1d1fb854fd770bf99f3e7a9deb584753"
	Sep 25 10:52:34 addons-183000 dockerd[1111]: time="2023-09-25T10:52:34.018791826Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 25 10:52:34 addons-183000 dockerd[1111]: time="2023-09-25T10:52:34.018844700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 10:52:34 addons-183000 dockerd[1111]: time="2023-09-25T10:52:34.018856825Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 25 10:52:34 addons-183000 dockerd[1111]: time="2023-09-25T10:52:34.018863408Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d5793dcd01c69       ghcr.io/headlamp-k8s/headlamp@sha256:bb15916c96306cd14f1c9c09c639d01d1d1fb854fd770bf99f3e7a9deb584753               12 minutes ago      Running             headlamp                  0                   df99a16ef6133       headlamp-58b88cff49-kdgv2
	f0ceeef2fd99f       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf        29 minutes ago      Running             gcp-auth                  0                   217fc96b3ae84       gcp-auth-d4c87556c-fgkgk
	3214d7d3645b3       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:01b7311f9512411ef6530e09dbdd3aeaea0abc4101227dbead4d44c36b255ca7   30 minutes ago      Running             gadget                    0                   1f38ec635c03d       gadget-dmqnx
	09ae8580d310e       97e04611ad434                                                                                                       30 minutes ago      Running             coredns                   0                   9802832060d13       coredns-5dd5756b68-nj9v5
	fff72387d957b       7da62c127fc0f                                                                                                       30 minutes ago      Running             kube-proxy                0                   2514b88f9fbec       kube-proxy-7t7bh
	e24563a552742       89d57b83c1786                                                                                                       30 minutes ago      Running             kube-controller-manager   0                   7170972f2383c       kube-controller-manager-addons-183000
	e38f0c6d58f79       30bb499447fe1                                                                                                       30 minutes ago      Running             kube-apiserver            0                   e3ec8dad501d8       kube-apiserver-addons-183000
	202a7fdac8250       9cdd6470f48c8                                                                                                       30 minutes ago      Running             etcd                      0                   f07db97eda3c5       etcd-addons-183000
	5a87dfcd0e1a4       64fc40cee3716                                                                                                       30 minutes ago      Running             kube-scheduler            0                   88f62df9ef878       kube-scheduler-addons-183000
	
	* 
	* ==> coredns [09ae8580d310] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:53855 - 12762 "HINFO IN 6175233926506353361.1980247959579836404. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004134462s
	[INFO] 10.244.0.5:53045 - 37584 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000106198s
	[INFO] 10.244.0.5:58309 - 60928 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000170558s
	[INFO] 10.244.0.5:51843 - 23622 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000213104s
	[INFO] 10.244.0.5:42760 - 58990 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000042504s
	[INFO] 10.244.0.5:51340 - 46119 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00004929s
	[INFO] 10.244.0.5:39848 - 8379 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000023105s
	[INFO] 10.244.0.5:32887 - 31577 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001136668s
	[INFO] 10.244.0.5:49269 - 43084 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.001085546s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-183000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-183000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1bf6c3d5317028f348e55ea19d261973a6487d3c
	                    minikube.k8s.io/name=addons-183000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_25T03_34_13_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 25 Sep 2023 10:34:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-183000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 25 Sep 2023 11:04:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 25 Sep 2023 11:02:55 +0000   Mon, 25 Sep 2023 10:34:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 25 Sep 2023 11:02:55 +0000   Mon, 25 Sep 2023 10:34:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 25 Sep 2023 11:02:55 +0000   Mon, 25 Sep 2023 10:34:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 25 Sep 2023 11:02:55 +0000   Mon, 25 Sep 2023 10:34:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-183000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 3ec93b0c295a46b69f667e92919bae36
	  System UUID:                3ec93b0c295a46b69f667e92919bae36
	  Boot ID:                    e140f335-14d6-4d36-af6f-4c16a72ee860
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  gadget                      gadget-dmqnx                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         30m
	  gcp-auth                    gcp-auth-d4c87556c-fgkgk                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         30m
	  headlamp                    headlamp-58b88cff49-kdgv2                0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-5dd5756b68-nj9v5                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     30m
	  kube-system                 etcd-addons-183000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         30m
	  kube-system                 kube-apiserver-addons-183000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         30m
	  kube-system                 kube-controller-manager-addons-183000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         30m
	  kube-system                 kube-proxy-7t7bh                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         30m
	  kube-system                 kube-scheduler-addons-183000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         30m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 30m   kube-proxy       
	  Normal  Starting                 30m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  30m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  30m   kubelet          Node addons-183000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30m   kubelet          Node addons-183000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30m   kubelet          Node addons-183000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                30m   kubelet          Node addons-183000 status is now: NodeReady
	  Normal  RegisteredNode           30m   node-controller  Node addons-183000 event: Registered Node addons-183000 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.641440] EINJ: EINJ table not found.
	[  +0.489201] systemd-fstab-generator[117]: Ignoring "noauto" for root device
	[  +0.043090] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000792] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +3.110509] systemd-fstab-generator[482]: Ignoring "noauto" for root device
	[  +0.074666] systemd-fstab-generator[494]: Ignoring "noauto" for root device
	[  +0.418795] systemd-fstab-generator[667]: Ignoring "noauto" for root device
	[  +0.183648] systemd-fstab-generator[704]: Ignoring "noauto" for root device
	[  +0.073331] systemd-fstab-generator[715]: Ignoring "noauto" for root device
	[  +0.088908] systemd-fstab-generator[728]: Ignoring "noauto" for root device
	[  +1.149460] kauditd_printk_skb: 17 callbacks suppressed
	[  +0.104006] systemd-fstab-generator[917]: Ignoring "noauto" for root device
	[  +0.078468] systemd-fstab-generator[928]: Ignoring "noauto" for root device
	[  +0.058376] systemd-fstab-generator[939]: Ignoring "noauto" for root device
	[  +0.070842] systemd-fstab-generator[950]: Ignoring "noauto" for root device
	[  +0.085054] systemd-fstab-generator[991]: Ignoring "noauto" for root device
	[Sep25 10:34] systemd-fstab-generator[1098]: Ignoring "noauto" for root device
	[  +2.191489] kauditd_printk_skb: 29 callbacks suppressed
	[  +2.399489] systemd-fstab-generator[1471]: Ignoring "noauto" for root device
	[  +5.122490] systemd-fstab-generator[2347]: Ignoring "noauto" for root device
	[ +14.463207] kauditd_printk_skb: 41 callbacks suppressed
	[  +6.798894] kauditd_printk_skb: 21 callbacks suppressed
	[  +4.810513] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[  +3.498700] kauditd_printk_skb: 12 callbacks suppressed
	[Sep25 10:52] kauditd_printk_skb: 5 callbacks suppressed
	
	* 
	* ==> etcd [202a7fdac825] <==
	* {"level":"info","ts":"2023-09-25T10:34:31.937317Z","caller":"traceutil/trace.go:171","msg":"trace[667548922] transaction","detail":"{read_only:false; response_revision:416; number_of_response:1; }","duration":"126.937574ms","start":"2023-09-25T10:34:31.810371Z","end":"2023-09-25T10:34:31.937309Z","steps":["trace[667548922] 'process raft request'  (duration: 126.824018ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-25T10:34:36.99882Z","caller":"traceutil/trace.go:171","msg":"trace[243510449] linearizableReadLoop","detail":"{readStateIndex:482; appliedIndex:481; }","duration":"165.982552ms","start":"2023-09-25T10:34:36.832829Z","end":"2023-09-25T10:34:36.998811Z","steps":["trace[243510449] 'read index received'  (duration: 165.770762ms)","trace[243510449] 'applied index is now lower than readState.Index'  (duration: 211.209µs)"],"step_count":2}
	{"level":"warn","ts":"2023-09-25T10:34:36.998969Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"166.151453ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2023-09-25T10:34:36.999019Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.796797ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-5dd5756b68-nj9v5\" ","response":"range_response_count:1 size:5002"}
	{"level":"info","ts":"2023-09-25T10:34:36.999045Z","caller":"traceutil/trace.go:171","msg":"trace[2057756314] range","detail":"{range_begin:/registry/pods/kube-system/coredns-5dd5756b68-nj9v5; range_end:; response_count:1; response_revision:469; }","duration":"123.811177ms","start":"2023-09-25T10:34:36.875219Z","end":"2023-09-25T10:34:36.99903Z","steps":["trace[2057756314] 'agreement among raft nodes before linearized reading'  (duration: 123.788776ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-25T10:34:36.999164Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"162.803156ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:6 size:31393"}
	{"level":"info","ts":"2023-09-25T10:34:36.999205Z","caller":"traceutil/trace.go:171","msg":"trace[1483278895] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:6; response_revision:469; }","duration":"162.834825ms","start":"2023-09-25T10:34:36.836356Z","end":"2023-09-25T10:34:36.99919Z","steps":["trace[1483278895] 'agreement among raft nodes before linearized reading'  (duration: 162.701625ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-25T10:34:36.999Z","caller":"traceutil/trace.go:171","msg":"trace[3634572] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:0; response_revision:469; }","duration":"166.183579ms","start":"2023-09-25T10:34:36.832812Z","end":"2023-09-25T10:34:36.998995Z","steps":["trace[3634572] 'agreement among raft nodes before linearized reading'  (duration: 166.053912ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-25T10:34:36.998947Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"163.574471ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:6 size:31393"}
	{"level":"info","ts":"2023-09-25T10:34:36.999285Z","caller":"traceutil/trace.go:171","msg":"trace[819315326] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:6; response_revision:469; }","duration":"163.922954ms","start":"2023-09-25T10:34:36.83536Z","end":"2023-09-25T10:34:36.999283Z","steps":["trace[819315326] 'agreement among raft nodes before linearized reading'  (duration: 163.541307ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-25T10:44:09.779305Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":608}
	{"level":"info","ts":"2023-09-25T10:44:09.779775Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":608,"took":"346.08µs","hash":977468107}
	{"level":"info","ts":"2023-09-25T10:44:09.779794Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":977468107,"revision":608,"compact-revision":-1}
	{"level":"info","ts":"2023-09-25T10:49:09.783821Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":698}
	{"level":"info","ts":"2023-09-25T10:49:09.784257Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":698,"took":"244.664µs","hash":3592134345}
	{"level":"info","ts":"2023-09-25T10:49:09.784273Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3592134345,"revision":698,"compact-revision":608}
	{"level":"info","ts":"2023-09-25T10:54:09.786046Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":788}
	{"level":"info","ts":"2023-09-25T10:54:09.786391Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":788,"took":"183.456µs","hash":2921777324}
	{"level":"info","ts":"2023-09-25T10:54:09.786401Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2921777324,"revision":788,"compact-revision":698}
	{"level":"info","ts":"2023-09-25T10:59:09.78846Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":921}
	{"level":"info","ts":"2023-09-25T10:59:09.788929Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":921,"took":"220.122µs","hash":3368749381}
	{"level":"info","ts":"2023-09-25T10:59:09.788942Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3368749381,"revision":921,"compact-revision":788}
	{"level":"info","ts":"2023-09-25T11:04:09.790636Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1015}
	{"level":"info","ts":"2023-09-25T11:04:09.790979Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1015,"took":"207.581µs","hash":1495489543}
	{"level":"info","ts":"2023-09-25T11:04:09.790991Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1495489543,"revision":1015,"compact-revision":921}
	
	* 
	* ==> gcp-auth [f0ceeef2fd99] <==
	* 2023/09/25 10:34:44 GCP Auth Webhook started!
	2023/09/25 10:52:28 Ready to marshal response ...
	2023/09/25 10:52:28 Ready to write response ...
	2023/09/25 10:52:28 Ready to marshal response ...
	2023/09/25 10:52:28 Ready to write response ...
	2023/09/25 10:52:28 Ready to marshal response ...
	2023/09/25 10:52:28 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  11:04:40 up 30 min,  0 users,  load average: 0.02, 0.08, 0.09
	Linux addons-183000 5.10.57 #1 SMP PREEMPT Mon Sep 18 20:10:16 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [e38f0c6d58f7] <==
	* I0925 10:34:11.347033       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0925 10:34:11.348323       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0925 10:34:11.348329       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0925 10:34:11.481757       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0925 10:34:11.494186       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0925 10:34:11.536039       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0925 10:34:11.538110       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.105.2]
	I0925 10:34:11.538484       1 controller.go:624] quota admission added evaluator for: endpoints
	I0925 10:34:11.539858       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0925 10:34:12.380709       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0925 10:34:12.885080       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0925 10:34:12.893075       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0925 10:34:12.904077       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0925 10:34:26.498134       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0925 10:34:26.509156       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0925 10:34:27.526494       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0925 10:34:34.002889       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.108.108.100"}
	I0925 10:34:34.022823       1 controller.go:624] quota admission added evaluator for: jobs.batch
	I0925 10:39:10.399639       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0925 10:44:10.399758       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0925 10:49:10.400399       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0925 10:52:28.085866       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.104.63.194"}
	I0925 10:54:10.400553       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0925 10:59:10.400827       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0925 11:04:10.400976       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [e24563a55274] <==
	* I0925 11:01:26.518602       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0925 11:01:41.518847       1 pv_controller.go:1562] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0925 11:01:41.518891       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0925 11:01:56.521590       1 pv_controller.go:1562] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0925 11:01:56.521735       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0925 11:02:11.522240       1 pv_controller.go:1562] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0925 11:02:11.522280       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0925 11:02:26.524033       1 pv_controller.go:1562] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0925 11:02:26.524124       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0925 11:02:41.524053       1 pv_controller.go:1562] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0925 11:02:41.524091       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0925 11:02:56.524734       1 pv_controller.go:1562] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0925 11:02:56.524819       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0925 11:03:11.525104       1 pv_controller.go:1562] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0925 11:03:11.525169       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0925 11:03:26.525484       1 pv_controller.go:1562] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0925 11:03:26.525561       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0925 11:03:41.526494       1 pv_controller.go:1562] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0925 11:03:41.526512       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0925 11:03:56.527456       1 pv_controller.go:1562] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0925 11:03:56.527614       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0925 11:04:11.528390       1 pv_controller.go:1562] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0925 11:04:11.528457       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0925 11:04:26.528938       1 pv_controller.go:1562] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0925 11:04:26.529035       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	
	* 
	* ==> kube-proxy [fff72387d957] <==
	* I0925 10:34:27.163880       1 server_others.go:69] "Using iptables proxy"
	I0925 10:34:27.181208       1 node.go:141] Successfully retrieved node IP: 192.168.105.2
	I0925 10:34:27.228178       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0925 10:34:27.228201       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0925 10:34:27.231917       1 server_others.go:152] "Using iptables Proxier"
	I0925 10:34:27.231983       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0925 10:34:27.232100       1 server.go:846] "Version info" version="v1.28.2"
	I0925 10:34:27.232211       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0925 10:34:27.232663       1 config.go:188] "Starting service config controller"
	I0925 10:34:27.232700       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0925 10:34:27.232734       1 config.go:97] "Starting endpoint slice config controller"
	I0925 10:34:27.232760       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0925 10:34:27.233047       1 config.go:315] "Starting node config controller"
	I0925 10:34:27.233085       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0925 10:34:27.333424       1 shared_informer.go:318] Caches are synced for node config
	I0925 10:34:27.333462       1 shared_informer.go:318] Caches are synced for service config
	I0925 10:34:27.333490       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [5a87dfcd0e1a] <==
	* W0925 10:34:10.412769       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0925 10:34:10.413000       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0925 10:34:10.412552       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0925 10:34:10.413020       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0925 10:34:10.412572       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0925 10:34:10.413082       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0925 10:34:10.412878       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0925 10:34:10.413107       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0925 10:34:11.233945       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0925 10:34:11.233969       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0925 10:34:11.245555       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0925 10:34:11.245565       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0925 10:34:11.257234       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0925 10:34:11.257245       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0925 10:34:11.305366       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0925 10:34:11.305376       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0925 10:34:11.335532       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0925 10:34:11.335546       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0925 10:34:11.379250       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0925 10:34:11.379349       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0925 10:34:11.401540       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0925 10:34:11.401585       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0925 10:34:11.494359       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0925 10:34:11.494379       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0925 10:34:13.407721       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-09-25 10:33:55 UTC, ends at Mon 2023-09-25 11:04:40 UTC. --
	Sep 25 10:59:12 addons-183000 kubelet[2366]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 25 10:59:12 addons-183000 kubelet[2366]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 25 10:59:12 addons-183000 kubelet[2366]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 25 10:59:12 addons-183000 kubelet[2366]: W0925 10:59:12.969824    2366 machine.go:65] Cannot read vendor id correctly, set empty.
	Sep 25 11:00:12 addons-183000 kubelet[2366]: E0925 11:00:12.960839    2366 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 25 11:00:12 addons-183000 kubelet[2366]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 25 11:00:12 addons-183000 kubelet[2366]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 25 11:00:12 addons-183000 kubelet[2366]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 25 11:01:12 addons-183000 kubelet[2366]: E0925 11:01:12.961317    2366 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 25 11:01:12 addons-183000 kubelet[2366]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 25 11:01:12 addons-183000 kubelet[2366]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 25 11:01:12 addons-183000 kubelet[2366]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 25 11:02:12 addons-183000 kubelet[2366]: E0925 11:02:12.960658    2366 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 25 11:02:12 addons-183000 kubelet[2366]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 25 11:02:12 addons-183000 kubelet[2366]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 25 11:02:12 addons-183000 kubelet[2366]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 25 11:03:12 addons-183000 kubelet[2366]: E0925 11:03:12.961122    2366 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 25 11:03:12 addons-183000 kubelet[2366]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 25 11:03:12 addons-183000 kubelet[2366]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 25 11:03:12 addons-183000 kubelet[2366]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 25 11:04:12 addons-183000 kubelet[2366]: E0925 11:04:12.961050    2366 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 25 11:04:12 addons-183000 kubelet[2366]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 25 11:04:12 addons-183000 kubelet[2366]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 25 11:04:12 addons-183000 kubelet[2366]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 25 11:04:12 addons-183000 kubelet[2366]: W0925 11:04:12.970655    2366 machine.go:65] Cannot read vendor id correctly, set empty.
	

                                                
                                                
-- /stdout --
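The gcp-auth messages near the top of this dump describe the addon's behavior: once the webhook is running, GCP credentials are mounted into every newly created pod, and a pod can opt out via a label with the `gcp-auth-skip-secret` key. A minimal opt-out sketch (the pod name, image, and label value here are illustrative, not from this run):

kubectl --context addons-183000 apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: no-gcp-creds                # hypothetical pod name
  labels:
    gcp-auth-skip-secret: "true"    # the key is what the webhook checks; value shown is conventional
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
EOF

Since the webhook mutates pods at admission time, the label only affects pods created after it is set, which matches the "recreate them or rerun addons enable with --refresh" advice in the log above.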
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-183000 -n addons-183000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-183000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/CSI FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/CSI (720.85s)
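This failure is consistent with the dump above: the csi-hostpath-driver pods never became ready within the 6m0s deadline, so the `csi-hostpath-sc` StorageClass was never created and the controller manager loops on ProvisioningFailed for the `default/hpvc` claim. A minimal triage sketch, assuming the addons-183000 cluster is still reachable (the label selector is the one minikube waits on above):

# Did the driver pods ever appear? (same selector minikube waits on)
kubectl --context addons-183000 -n kube-system get pods -l kubernetes.io/minikube-addons=csi-hostpath-driver
# The StorageClass the test's PVC expects:
kubectl --context addons-183000 get storageclass csi-hostpath-sc
# Events explain why default/hpvc is stuck Pending:
kubectl --context addons-183000 describe pvc hpvc

The pod list and the PVC events would distinguish an image-pull or scheduling problem on this QEMU/arm64 profile from a provisioning one.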

                                                
                                    
TestAddons/parallel/CloudSpanner (817.77s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:831: failed waiting for cloud-spanner-emulator deployment to stabilize: timed out waiting for the condition
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
addons_test.go:833: ***** TestAddons/parallel/CloudSpanner: pod "app=cloud-spanner-emulator" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:833: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-183000 -n addons-183000
addons_test.go:833: TestAddons/parallel/CloudSpanner: showing logs for failed pods as of 2023-09-25 03:52:26.854259 -0700 PDT m=+1146.818520168
addons_test.go:834: failed waiting for app=cloud-spanner-emulator pod: app=cloud-spanner-emulator within 6m0s: context deadline exceeded
addons_test.go:836: (dbg) Run:  out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-183000
addons_test.go:836: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-183000: exit status 10 (1m36.947602875s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE: disable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl delete --force --ignore-not-found -f /etc/kubernetes/addons/deployment.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: the path "/etc/kubernetes/addons/deployment.yaml" does not exist
	]
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:837: failed to disable cloud-spanner addon: args "out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-183000" : exit status 10
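The disable failure is a knock-on effect of the enable timeout: the disable path runs `kubectl delete --force --ignore-not-found -f /etc/kubernetes/addons/deployment.yaml` inside the guest, but that manifest was apparently never written because the addon callbacks had already failed. A recovery sketch, under the assumption that re-enabling rewrites the manifest so the delete has something to act on:

# Confirm nothing was ever deployed (label is the one the test waits on):
kubectl --context addons-183000 get deploy -A -l app=cloud-spanner-emulator
# Re-enable to rewrite the addon manifest, then retry the disable:
out/minikube-darwin-arm64 addons enable cloud-spanner -p addons-183000
out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-183000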
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-183000 -n addons-183000
helpers_test.go:244: <<< TestAddons/parallel/CloudSpanner FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/CloudSpanner]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-183000 logs -n 25
helpers_test.go:252: TestAddons/parallel/CloudSpanner logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-427000 | jenkins | v1.31.2 | 25 Sep 23 03:33 PDT |                     |
	|         | -p download-only-427000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-427000 | jenkins | v1.31.2 | 25 Sep 23 03:33 PDT |                     |
	|         | -p download-only-427000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.31.2 | 25 Sep 23 03:33 PDT | 25 Sep 23 03:33 PDT |
	| delete  | -p download-only-427000        | download-only-427000 | jenkins | v1.31.2 | 25 Sep 23 03:33 PDT | 25 Sep 23 03:33 PDT |
	| delete  | -p download-only-427000        | download-only-427000 | jenkins | v1.31.2 | 25 Sep 23 03:33 PDT | 25 Sep 23 03:33 PDT |
	| start   | --download-only -p             | binary-mirror-317000 | jenkins | v1.31.2 | 25 Sep 23 03:33 PDT |                     |
	|         | binary-mirror-317000           |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --binary-mirror                |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49310         |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-317000        | binary-mirror-317000 | jenkins | v1.31.2 | 25 Sep 23 03:33 PDT | 25 Sep 23 03:33 PDT |
	| start   | -p addons-183000               | addons-183000        | jenkins | v1.31.2 | 25 Sep 23 03:33 PDT | 25 Sep 23 03:40 PDT |
	|         | --wait=true --memory=4000      |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --addons=registry              |                      |         |         |                     |                     |
	|         | --addons=metrics-server        |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                      |         |         |                     |                     |
	|         | --addons=gcp-auth              |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|         | --addons=ingress               |                      |         |         |                     |                     |
	|         | --addons=ingress-dns           |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-183000        | jenkins | v1.31.2 | 25 Sep 23 03:52 PDT |                     |
	|         | addons-183000                  |                      |         |         |                     |                     |
	| addons  | enable headlamp                | addons-183000        | jenkins | v1.31.2 | 25 Sep 23 03:52 PDT | 25 Sep 23 03:52 PDT |
	|         | -p addons-183000               |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
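	Each multi-row entry in the table above is a single CLI invocation wrapped across cells. For example, the addons-183000 start row reassembles to the one-liner below (flags copied from the table; backslash continuations added only for readability):

	  minikube start -p addons-183000 --wait=true --memory=4000 \
	    --alsologtostderr --addons=registry --addons=metrics-server \
	    --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth \
	    --addons=cloud-spanner --addons=inspektor-gadget --driver=qemu2 \
	    --addons=ingress --addons=ingress-dns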
	
	* ==> Last Start <==
	* Log file created at: 2023/09/25 03:33:43
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.21.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
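	The entries that follow use the klog layout documented above. As a minimal sketch (assuming this log is saved to a file, here hypothetically named last-start.log), the warning and error entries can be isolated by matching the leading severity letter and mmdd stamp:

	  # W/E lines only, per the [IWEF]mmdd hh:mm:ss... format line above.
	  grep -E '^[WE][0-9]{4} ' last-start.log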
	I0925 03:33:43.113263    1555 out.go:296] Setting OutFile to fd 1 ...
	I0925 03:33:43.113390    1555 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 03:33:43.113393    1555 out.go:309] Setting ErrFile to fd 2...
	I0925 03:33:43.113395    1555 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 03:33:43.113522    1555 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17297-1010/.minikube/bin
	I0925 03:33:43.114539    1555 out.go:303] Setting JSON to false
	I0925 03:33:43.129689    1555 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":198,"bootTime":1695637825,"procs":391,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.6","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 03:33:43.129759    1555 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0925 03:33:43.134529    1555 out.go:177] * [addons-183000] minikube v1.31.2 on Darwin 13.6 (arm64)
	I0925 03:33:43.141636    1555 out.go:177]   - MINIKUBE_LOCATION=17297
	I0925 03:33:43.145595    1555 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	I0925 03:33:43.141675    1555 notify.go:220] Checking for updates...
	I0925 03:33:43.149882    1555 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 03:33:43.152528    1555 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 03:33:43.155561    1555 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	I0925 03:33:43.158461    1555 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 03:33:43.161685    1555 driver.go:373] Setting default libvirt URI to qemu:///system
	I0925 03:33:43.165518    1555 out.go:177] * Using the qemu2 driver based on user configuration
	I0925 03:33:43.170494    1555 start.go:298] selected driver: qemu2
	I0925 03:33:43.170500    1555 start.go:902] validating driver "qemu2" against <nil>
	I0925 03:33:43.170505    1555 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 03:33:43.172415    1555 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0925 03:33:43.175485    1555 out.go:177] * Automatically selected the socket_vmnet network
	I0925 03:33:43.178631    1555 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0925 03:33:43.178656    1555 cni.go:84] Creating CNI manager for ""
	I0925 03:33:43.178667    1555 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 03:33:43.178671    1555 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0925 03:33:43.178683    1555 start_flags.go:321] config:
	{Name:addons-183000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-183000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID
:0 AutoPauseInterval:1m0s}
	I0925 03:33:43.182821    1555 iso.go:125] acquiring lock: {Name:mkf881a60cf9fd1672567914305ff6f7a4f13809 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 03:33:43.186491    1555 out.go:177] * Starting control plane node addons-183000 in cluster addons-183000
	I0925 03:33:43.194499    1555 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0925 03:33:43.194520    1555 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0925 03:33:43.194535    1555 cache.go:57] Caching tarball of preloaded images
	I0925 03:33:43.194599    1555 preload.go:174] Found /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0925 03:33:43.194605    1555 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0925 03:33:43.194819    1555 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/config.json ...
	I0925 03:33:43.194831    1555 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/config.json: {Name:mk49657fba0a0e3293097f9bbbd8574691cb2471 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 03:33:43.195036    1555 start.go:365] acquiring machines lock for addons-183000: {Name:mk918d99818e2cf21e5912bc291ed18d4b442ba6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 03:33:43.195158    1555 start.go:369] acquired machines lock for "addons-183000" in 116.458µs
	I0925 03:33:43.195167    1555 start.go:93] Provisioning new machine with config: &{Name:addons-183000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.28.2 ClusterName:addons-183000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262
144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 03:33:43.195202    1555 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 03:33:43.203570    1555 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0925 03:33:43.526310    1555 start.go:159] libmachine.API.Create for "addons-183000" (driver="qemu2")
	I0925 03:33:43.526360    1555 client.go:168] LocalClient.Create starting
	I0925 03:33:43.526524    1555 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem
	I0925 03:33:43.685162    1555 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/cert.pem
	I0925 03:33:43.725069    1555 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17297-1010/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0925 03:33:44.270899    1555 main.go:141] libmachine: Creating SSH key...
	I0925 03:33:44.356373    1555 main.go:141] libmachine: Creating Disk image...
	I0925 03:33:44.356381    1555 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 03:33:44.356565    1555 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/disk.qcow2.raw /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/disk.qcow2
	I0925 03:33:44.389562    1555 main.go:141] libmachine: STDOUT: 
	I0925 03:33:44.389584    1555 main.go:141] libmachine: STDERR: 
	I0925 03:33:44.389658    1555 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/disk.qcow2 +20000M
	I0925 03:33:44.397120    1555 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 03:33:44.397139    1555 main.go:141] libmachine: STDERR: 
	I0925 03:33:44.397152    1555 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/disk.qcow2
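	The disk image is built in two steps, both logged above: a raw-to-qcow2 conversion followed by a resize. A minimal sketch of the same sequence with shortened placeholder paths:

	  # Mirror the two qemu-img calls above: convert the raw scratch file
	  # to qcow2, then grow the image by 20000 MB.
	  qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2
	  qemu-img resize disk.qcow2 +20000M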
	I0925 03:33:44.397157    1555 main.go:141] libmachine: Starting QEMU VM...
	I0925 03:33:44.397194    1555 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:70:b3:50:3d:bc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/disk.qcow2
	I0925 03:33:44.464471    1555 main.go:141] libmachine: STDOUT: 
	I0925 03:33:44.464499    1555 main.go:141] libmachine: STDERR: 
	I0925 03:33:44.464503    1555 main.go:141] libmachine: Attempt 0
	I0925 03:33:44.464522    1555 main.go:141] libmachine: Searching for 4e:70:b3:50:3d:bc in /var/db/dhcpd_leases ...
	I0925 03:33:46.465678    1555 main.go:141] libmachine: Attempt 1
	I0925 03:33:46.465761    1555 main.go:141] libmachine: Searching for 4e:70:b3:50:3d:bc in /var/db/dhcpd_leases ...
	I0925 03:33:48.467021    1555 main.go:141] libmachine: Attempt 2
	I0925 03:33:48.467061    1555 main.go:141] libmachine: Searching for 4e:70:b3:50:3d:bc in /var/db/dhcpd_leases ...
	I0925 03:33:50.468194    1555 main.go:141] libmachine: Attempt 3
	I0925 03:33:50.468212    1555 main.go:141] libmachine: Searching for 4e:70:b3:50:3d:bc in /var/db/dhcpd_leases ...
	I0925 03:33:52.469241    1555 main.go:141] libmachine: Attempt 4
	I0925 03:33:52.469258    1555 main.go:141] libmachine: Searching for 4e:70:b3:50:3d:bc in /var/db/dhcpd_leases ...
	I0925 03:33:54.470316    1555 main.go:141] libmachine: Attempt 5
	I0925 03:33:54.470352    1555 main.go:141] libmachine: Searching for 4e:70:b3:50:3d:bc in /var/db/dhcpd_leases ...
	I0925 03:33:56.471428    1555 main.go:141] libmachine: Attempt 6
	I0925 03:33:56.471461    1555 main.go:141] libmachine: Searching for 4e:70:b3:50:3d:bc in /var/db/dhcpd_leases ...
	I0925 03:33:56.471625    1555 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0925 03:33:56.471679    1555 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:4e:70:b3:50:3d:bc ID:1,4e:70:b3:50:3d:bc Lease:0x6512b393}
	I0925 03:33:56.471685    1555 main.go:141] libmachine: Found match: 4e:70:b3:50:3d:bc
	I0925 03:33:56.471705    1555 main.go:141] libmachine: IP: 192.168.105.2
	I0925 03:33:56.471714    1555 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
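	The retry loop above polls the macOS DHCP lease database for the MAC address minikube generated for the VM. A rough equivalent lookup, assuming the usual one-field-per-line layout of /var/db/dhcpd_leases:

	  # Print the ip_address= line recorded just before the matching MAC entry.
	  awk '/ip_address=/{ip=$0} /4e:70:b3:50:3d:bc/{print ip; exit}' /var/db/dhcpd_leases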
	I0925 03:33:57.476002    1555 machine.go:88] provisioning docker machine ...
	I0925 03:33:57.476029    1555 buildroot.go:166] provisioning hostname "addons-183000"
	I0925 03:33:57.476399    1555 main.go:141] libmachine: Using SSH client type: native
	I0925 03:33:57.476656    1555 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100de8760] 0x100deaed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0925 03:33:57.476663    1555 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-183000 && echo "addons-183000" | sudo tee /etc/hostname
	I0925 03:33:57.549226    1555 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-183000
	
	I0925 03:33:57.549294    1555 main.go:141] libmachine: Using SSH client type: native
	I0925 03:33:57.549565    1555 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100de8760] 0x100deaed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0925 03:33:57.549580    1555 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-183000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-183000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-183000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0925 03:33:57.619664    1555 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0925 03:33:57.619678    1555 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17297-1010/.minikube CaCertPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17297-1010/.minikube}
	I0925 03:33:57.619692    1555 buildroot.go:174] setting up certificates
	I0925 03:33:57.619698    1555 provision.go:83] configureAuth start
	I0925 03:33:57.619702    1555 provision.go:138] copyHostCerts
	I0925 03:33:57.619800    1555 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17297-1010/.minikube/key.pem (1679 bytes)
	I0925 03:33:57.620015    1555 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17297-1010/.minikube/ca.pem (1082 bytes)
	I0925 03:33:57.620106    1555 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17297-1010/.minikube/cert.pem (1123 bytes)
	I0925 03:33:57.620180    1555 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca-key.pem org=jenkins.addons-183000 san=[192.168.105.2 192.168.105.2 localhost 127.0.0.1 minikube addons-183000]
	I0925 03:33:57.680529    1555 provision.go:172] copyRemoteCerts
	I0925 03:33:57.680584    1555 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0925 03:33:57.680600    1555 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/id_rsa Username:docker}
	I0925 03:33:57.716693    1555 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0925 03:33:57.724070    1555 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0925 03:33:57.731348    1555 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0925 03:33:57.738044    1555 provision.go:86] duration metric: configureAuth took 118.340875ms
	I0925 03:33:57.738067    1555 buildroot.go:189] setting minikube options for container-runtime
	I0925 03:33:57.738181    1555 config.go:182] Loaded profile config "addons-183000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 03:33:57.738225    1555 main.go:141] libmachine: Using SSH client type: native
	I0925 03:33:57.738442    1555 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100de8760] 0x100deaed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0925 03:33:57.738446    1555 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0925 03:33:57.806528    1555 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0925 03:33:57.806536    1555 buildroot.go:70] root file system type: tmpfs
	I0925 03:33:57.806591    1555 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0925 03:33:57.806639    1555 main.go:141] libmachine: Using SSH client type: native
	I0925 03:33:57.806901    1555 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100de8760] 0x100deaed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0925 03:33:57.806939    1555 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0925 03:33:57.879305    1555 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0925 03:33:57.879349    1555 main.go:141] libmachine: Using SSH client type: native
	I0925 03:33:57.879600    1555 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100de8760] 0x100deaed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0925 03:33:57.879612    1555 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0925 03:33:58.218156    1555 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0925 03:33:58.218176    1555 machine.go:91] provisioned docker machine in 742.178459ms
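	The diff ... || { mv ...; systemctl ... } idiom above only swaps docker.service.new into place and bounces the daemon when the rendered unit actually differs from what is installed; on this first boot diff fails because no unit exists yet, so the fallback branch installs it, which is why systemd reports the newly created enable symlink.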
	I0925 03:33:58.218184    1555 client.go:171] LocalClient.Create took 14.692090292s
	I0925 03:33:58.218196    1555 start.go:167] duration metric: libmachine.API.Create for "addons-183000" took 14.692162542s
	I0925 03:33:58.218201    1555 start.go:300] post-start starting for "addons-183000" (driver="qemu2")
	I0925 03:33:58.218213    1555 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0925 03:33:58.218288    1555 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0925 03:33:58.218298    1555 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/id_rsa Username:docker}
	I0925 03:33:58.255037    1555 ssh_runner.go:195] Run: cat /etc/os-release
	I0925 03:33:58.256454    1555 info.go:137] Remote host: Buildroot 2021.02.12
	I0925 03:33:58.256461    1555 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17297-1010/.minikube/addons for local assets ...
	I0925 03:33:58.256533    1555 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17297-1010/.minikube/files for local assets ...
	I0925 03:33:58.256562    1555 start.go:303] post-start completed in 38.354459ms
	I0925 03:33:58.256920    1555 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/config.json ...
	I0925 03:33:58.257077    1555 start.go:128] duration metric: createHost completed in 15.062148875s
	I0925 03:33:58.257104    1555 main.go:141] libmachine: Using SSH client type: native
	I0925 03:33:58.257337    1555 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100de8760] 0x100deaed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0925 03:33:58.257341    1555 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0925 03:33:58.325173    1555 main.go:141] libmachine: SSH cmd err, output: <nil>: 1695638038.462407626
	
	I0925 03:33:58.325184    1555 fix.go:206] guest clock: 1695638038.462407626
	I0925 03:33:58.325188    1555 fix.go:219] Guest: 2023-09-25 03:33:58.462407626 -0700 PDT Remote: 2023-09-25 03:33:58.257082 -0700 PDT m=+15.162425626 (delta=205.325626ms)
	I0925 03:33:58.325199    1555 fix.go:190] guest clock delta is within tolerance: 205.325626ms
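	The delta above is the guest's epoch reading (1695638038.462407626) minus the host's wall clock at the moment of the read, about 0.205 s here. A sketch of the same comparison, assuming SSH access as in this run (the guest's Buildroot date supports %N, as its reading shows; python3 stands in for sub-second host time on macOS):

	  guest=$(ssh docker@192.168.105.2 date +%s.%N)          # guest epoch, ns precision
	  host=$(python3 -c 'import time; print(time.time())')   # host epoch, sub-second
	  echo "guest-host delta: $(echo "$guest - $host" | bc) s"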
	I0925 03:33:58.325201    1555 start.go:83] releasing machines lock for "addons-183000", held for 15.130317917s
	I0925 03:33:58.325486    1555 ssh_runner.go:195] Run: cat /version.json
	I0925 03:33:58.325494    1555 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/id_rsa Username:docker}
	I0925 03:33:58.325516    1555 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0925 03:33:58.325555    1555 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/id_rsa Username:docker}
	I0925 03:33:58.361340    1555 ssh_runner.go:195] Run: systemctl --version
	I0925 03:33:58.402839    1555 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0925 03:33:58.404630    1555 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0925 03:33:58.404664    1555 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0925 03:33:58.409389    1555 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0925 03:33:58.409398    1555 start.go:469] detecting cgroup driver to use...
	I0925 03:33:58.409504    1555 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0925 03:33:58.414731    1555 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0925 03:33:58.417759    1555 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0925 03:33:58.420882    1555 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0925 03:33:58.420905    1555 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0925 03:33:58.424376    1555 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0925 03:33:58.427971    1555 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0925 03:33:58.431438    1555 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0925 03:33:58.434555    1555 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0925 03:33:58.437481    1555 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0925 03:33:58.440650    1555 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0925 03:33:58.444117    1555 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0925 03:33:58.446963    1555 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 03:33:58.506828    1555 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0925 03:33:58.515326    1555 start.go:469] detecting cgroup driver to use...
	I0925 03:33:58.515396    1555 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0925 03:33:58.520350    1555 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0925 03:33:58.525290    1555 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0925 03:33:58.532641    1555 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0925 03:33:58.537661    1555 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0925 03:33:58.542291    1555 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0925 03:33:58.583433    1555 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0925 03:33:58.588627    1555 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0925 03:33:58.594011    1555 ssh_runner.go:195] Run: which cri-dockerd
	I0925 03:33:58.595317    1555 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0925 03:33:58.597772    1555 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0925 03:33:58.602614    1555 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0925 03:33:58.687592    1555 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0925 03:33:58.763371    1555 docker.go:554] configuring docker to use "cgroupfs" as cgroup driver...
	I0925 03:33:58.763431    1555 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0925 03:33:58.768807    1555 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 03:33:58.850856    1555 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0925 03:34:00.021109    1555 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.170257708s)
	I0925 03:34:00.021184    1555 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0925 03:34:00.102397    1555 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0925 03:34:00.182389    1555 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0925 03:34:00.242288    1555 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 03:34:00.310048    1555 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0925 03:34:00.320927    1555 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 03:34:00.397773    1555 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0925 03:34:00.421934    1555 start.go:516] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0925 03:34:00.422022    1555 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0925 03:34:00.424107    1555 start.go:537] Will wait 60s for crictl version
	I0925 03:34:00.424134    1555 ssh_runner.go:195] Run: which crictl
	I0925 03:34:00.425400    1555 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0925 03:34:00.448268    1555 start.go:553] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I0925 03:34:00.448328    1555 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0925 03:34:00.458640    1555 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0925 03:34:00.474285    1555 out.go:204] * Preparing Kubernetes v1.28.2 on Docker 24.0.6 ...
	I0925 03:34:00.474362    1555 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0925 03:34:00.475766    1555 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0925 03:34:00.479918    1555 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0925 03:34:00.479959    1555 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0925 03:34:00.485137    1555 docker.go:664] Got preloaded images: 
	I0925 03:34:00.485144    1555 docker.go:670] registry.k8s.io/kube-apiserver:v1.28.2 wasn't preloaded
	I0925 03:34:00.485184    1555 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0925 03:34:00.488328    1555 ssh_runner.go:195] Run: which lz4
	I0925 03:34:00.489753    1555 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0925 03:34:00.490946    1555 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0925 03:34:00.490958    1555 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (356993689 bytes)
	I0925 03:34:01.821604    1555 docker.go:628] Took 1.331913 seconds to copy over tarball
	I0925 03:34:01.821663    1555 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0925 03:34:02.850635    1555 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.028977875s)
	I0925 03:34:02.850646    1555 ssh_runner.go:146] rm: /preloaded.tar.lz4
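	For scale: the 356,993,689-byte preload tarball went over SSH in 1.331913 s, roughly 357 MB / 1.33 s ≈ 268 MB/s, before the 1.03 s lz4 extraction logged above.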
	I0925 03:34:02.866214    1555 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0925 03:34:02.869216    1555 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0925 03:34:02.874196    1555 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 03:34:02.955148    1555 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0925 03:34:05.167252    1555 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.212127209s)
	I0925 03:34:05.167356    1555 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0925 03:34:05.173293    1555 docker.go:664] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.2
	registry.k8s.io/kube-scheduler:v1.28.2
	registry.k8s.io/kube-proxy:v1.28.2
	registry.k8s.io/kube-controller-manager:v1.28.2
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0925 03:34:05.173304    1555 cache_images.go:84] Images are preloaded, skipping loading
	I0925 03:34:05.173372    1555 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0925 03:34:05.180961    1555 cni.go:84] Creating CNI manager for ""
	I0925 03:34:05.180975    1555 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 03:34:05.180995    1555 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0925 03:34:05.181006    1555 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-183000 NodeName:addons-183000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/ku
bernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0925 03:34:05.181071    1555 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-183000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
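	One way to sanity-check a generated config like the one above before it drives an init, assuming shell access to the node and the kubeadm binary staged as in this run, is kubeadm's standalone preflight phase:

	  sudo /var/lib/minikube/binaries/v1.28.2/kubeadm init phase preflight \
	    --config /var/tmp/minikube/kubeadm.yaml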
	
	I0925 03:34:05.181111    1555 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-183000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:addons-183000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0925 03:34:05.181162    1555 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I0925 03:34:05.184441    1555 binaries.go:44] Found k8s binaries, skipping transfer
	I0925 03:34:05.184477    1555 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0925 03:34:05.187654    1555 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0925 03:34:05.192980    1555 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0925 03:34:05.197983    1555 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I0925 03:34:05.202799    1555 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0925 03:34:05.204148    1555 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0925 03:34:05.208295    1555 certs.go:56] Setting up /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000 for IP: 192.168.105.2
	I0925 03:34:05.208303    1555 certs.go:190] acquiring lock for shared ca certs: {Name:mk095b03680bcdeba6c321a9f458c9fbafa67639 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 03:34:05.208463    1555 certs.go:204] generating minikubeCA CA: /Users/jenkins/minikube-integration/17297-1010/.minikube/ca.key
	I0925 03:34:05.279404    1555 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17297-1010/.minikube/ca.crt ...
	I0925 03:34:05.279413    1555 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/ca.crt: {Name:mk70f9fc8ba800117a8a8b4d751d3a98c619cb54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 03:34:05.279591    1555 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17297-1010/.minikube/ca.key ...
	I0925 03:34:05.279595    1555 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/ca.key: {Name:mkd44aa01a2f3e5b978643c9a3feb1028c2bb791 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 03:34:05.279712    1555 certs.go:204] generating proxyClientCA CA: /Users/jenkins/minikube-integration/17297-1010/.minikube/proxy-client-ca.key
	I0925 03:34:05.342350    1555 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17297-1010/.minikube/proxy-client-ca.crt ...
	I0925 03:34:05.342356    1555 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/proxy-client-ca.crt: {Name:mkc0af119bea050a868312bfe8f89d742604990c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 03:34:05.342558    1555 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17297-1010/.minikube/proxy-client-ca.key ...
	I0925 03:34:05.342563    1555 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/proxy-client-ca.key: {Name:mka9b8c6393173e2358c8b84eb9bff6ea6851f33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 03:34:05.342694    1555 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/client.key
	I0925 03:34:05.342700    1555 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/client.crt with IP's: []
	I0925 03:34:05.380999    1555 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/client.crt ...
	I0925 03:34:05.381013    1555 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/client.crt: {Name:mkec4b98dbbfb657baac4f5fae18fe43bd8b5970 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 03:34:05.381125    1555 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/client.key ...
	I0925 03:34:05.381130    1555 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/client.key: {Name:mk8be81ea1673fa1894559e8faa2fa2323674614 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 03:34:05.381227    1555 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/apiserver.key.96055969
	I0925 03:34:05.381235    1555 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/apiserver.crt.96055969 with IP's: [192.168.105.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0925 03:34:05.441721    1555 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/apiserver.crt.96055969 ...
	I0925 03:34:05.441725    1555 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/apiserver.crt.96055969: {Name:mkba38dc1a56241112b86d1503bca4f2588c1bf7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 03:34:05.441849    1555 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/apiserver.key.96055969 ...
	I0925 03:34:05.441852    1555 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/apiserver.key.96055969: {Name:mk41423e9550dcb3371da4467db52078d1bb4d78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 03:34:05.441956    1555 certs.go:337] copying /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/apiserver.crt.96055969 -> /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/apiserver.crt
	I0925 03:34:05.442053    1555 certs.go:341] copying /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/apiserver.key.96055969 -> /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/apiserver.key
	I0925 03:34:05.442146    1555 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/proxy-client.key
	I0925 03:34:05.442154    1555 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/proxy-client.crt with IP's: []
	I0925 03:34:05.578079    1555 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/proxy-client.crt ...
	I0925 03:34:05.578082    1555 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/proxy-client.crt: {Name:mkbd132fd7a0f2cb28d572f95bd43c9a1ef215f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 03:34:05.578216    1555 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/proxy-client.key ...
	I0925 03:34:05.578218    1555 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/proxy-client.key: {Name:mkf93f480df65e887c0e782806fe1d821d05370d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 03:34:05.578436    1555 certs.go:437] found cert: /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca-key.pem (1675 bytes)
	I0925 03:34:05.578458    1555 certs.go:437] found cert: /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem (1082 bytes)
	I0925 03:34:05.578479    1555 certs.go:437] found cert: /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/cert.pem (1123 bytes)
	I0925 03:34:05.578499    1555 certs.go:437] found cert: /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/key.pem (1679 bytes)
	I0925 03:34:05.578876    1555 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0925 03:34:05.587435    1555 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0925 03:34:05.594545    1555 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0925 03:34:05.601433    1555 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0925 03:34:05.608504    1555 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0925 03:34:05.616247    1555 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0925 03:34:05.623555    1555 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0925 03:34:05.630877    1555 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0925 03:34:05.637827    1555 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0925 03:34:05.644421    1555 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0925 03:34:05.650432    1555 ssh_runner.go:195] Run: openssl version
	I0925 03:34:05.652383    1555 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0925 03:34:05.655860    1555 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0925 03:34:05.657450    1555 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 25 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I0925 03:34:05.657472    1555 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0925 03:34:05.659354    1555 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
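	The b5213941.0 link name is not arbitrary: it is the OpenSSL subject hash of minikubeCA.pem, produced by the -hash call two lines up, with the .0 suffix OpenSSL expects when scanning a certificates directory:

	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	  # -> b5213941; trust lookups then resolve /etc/ssl/certs/b5213941.0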
	I0925 03:34:05.662355    1555 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0925 03:34:05.663775    1555 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0925 03:34:05.663811    1555 kubeadm.go:404] StartCluster: {Name:addons-183000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-183000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 03:34:05.663875    1555 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0925 03:34:05.669363    1555 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0925 03:34:05.672641    1555 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0925 03:34:05.675788    1555 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0925 03:34:05.678955    1555 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0925 03:34:05.678977    1555 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0925 03:34:05.700129    1555 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I0925 03:34:05.700165    1555 kubeadm.go:322] [preflight] Running pre-flight checks
	I0925 03:34:05.762507    1555 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0925 03:34:05.762580    1555 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0925 03:34:05.762631    1555 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0925 03:34:05.856523    1555 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0925 03:34:05.862696    1555 out.go:204]   - Generating certificates and keys ...
	I0925 03:34:05.862744    1555 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0925 03:34:05.862781    1555 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0925 03:34:05.954799    1555 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0925 03:34:06.088347    1555 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0925 03:34:06.179074    1555 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0925 03:34:06.367263    1555 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0925 03:34:06.441263    1555 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0925 03:34:06.441326    1555 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-183000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0925 03:34:06.679555    1555 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0925 03:34:06.679622    1555 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-183000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0925 03:34:06.780717    1555 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0925 03:34:06.934557    1555 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0925 03:34:07.004571    1555 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0925 03:34:07.004599    1555 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0925 03:34:07.096444    1555 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0925 03:34:07.197087    1555 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0925 03:34:07.295019    1555 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0925 03:34:07.459088    1555 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0925 03:34:07.459841    1555 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0925 03:34:07.461016    1555 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0925 03:34:07.464311    1555 out.go:204]   - Booting up control plane ...
	I0925 03:34:07.464429    1555 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0925 03:34:07.464523    1555 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0925 03:34:07.464562    1555 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0925 03:34:07.468573    1555 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0925 03:34:07.468914    1555 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0925 03:34:07.468980    1555 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0925 03:34:07.551081    1555 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0925 03:34:11.552205    1555 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.001307 seconds
	I0925 03:34:11.552277    1555 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0925 03:34:11.558090    1555 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0925 03:34:12.066492    1555 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0925 03:34:12.066604    1555 kubeadm.go:322] [mark-control-plane] Marking the node addons-183000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0925 03:34:12.571455    1555 kubeadm.go:322] [bootstrap-token] Using token: dcud0i.8u8422zl7jahtpxe
	I0925 03:34:12.577836    1555 out.go:204]   - Configuring RBAC rules ...
	I0925 03:34:12.577916    1555 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0925 03:34:12.580042    1555 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0925 03:34:12.583046    1555 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0925 03:34:12.584193    1555 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0925 03:34:12.585457    1555 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0925 03:34:12.586636    1555 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0925 03:34:12.592832    1555 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0925 03:34:12.757427    1555 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0925 03:34:12.982058    1555 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0925 03:34:12.982629    1555 kubeadm.go:322] 
	I0925 03:34:12.982664    1555 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0925 03:34:12.982667    1555 kubeadm.go:322] 
	I0925 03:34:12.982715    1555 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0925 03:34:12.982721    1555 kubeadm.go:322] 
	I0925 03:34:12.982735    1555 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0925 03:34:12.982762    1555 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0925 03:34:12.982824    1555 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0925 03:34:12.982828    1555 kubeadm.go:322] 
	I0925 03:34:12.982852    1555 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0925 03:34:12.982856    1555 kubeadm.go:322] 
	I0925 03:34:12.982895    1555 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0925 03:34:12.982898    1555 kubeadm.go:322] 
	I0925 03:34:12.982927    1555 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0925 03:34:12.982998    1555 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0925 03:34:12.983041    1555 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0925 03:34:12.983046    1555 kubeadm.go:322] 
	I0925 03:34:12.983087    1555 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0925 03:34:12.983123    1555 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0925 03:34:12.983125    1555 kubeadm.go:322] 
	I0925 03:34:12.983172    1555 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token dcud0i.8u8422zl7jahtpxe \
	I0925 03:34:12.983225    1555 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:3fc5fb926713648f8638ba10da0d4f45584d32929bcc07af5ada491c000ad47e \
	I0925 03:34:12.983240    1555 kubeadm.go:322] 	--control-plane 
	I0925 03:34:12.983242    1555 kubeadm.go:322] 
	I0925 03:34:12.983281    1555 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0925 03:34:12.983285    1555 kubeadm.go:322] 
	I0925 03:34:12.983328    1555 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token dcud0i.8u8422zl7jahtpxe \
	I0925 03:34:12.983387    1555 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:3fc5fb926713648f8638ba10da0d4f45584d32929bcc07af5ada491c000ad47e 
	I0925 03:34:12.983463    1555 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0925 03:34:12.983472    1555 cni.go:84] Creating CNI manager for ""
	I0925 03:34:12.983479    1555 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 03:34:12.992098    1555 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0925 03:34:12.995235    1555 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0925 03:34:12.999700    1555 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
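	minikube renders the bridge CNI config in memory and copies it straight to /etc/cni/net.d/1-k8s.conflist (the 457-byte transfer above); the file contents never appear in the log. A hedged Go sketch of that write, with an assumed minimal bridge-plus-portmap conflist standing in for the real file:

	    package main

	    import "os"

	    // An illustrative bridge CNI conflist: a bridge plugin with host-local
	    // IPAM plus a portmap chain. Assumed content for illustration only, not
	    // the exact 457-byte file minikube generated above.
	    const conflist = `{
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
	        },
	        {"type": "portmap", "capabilities": {"portMappings": true}}
	      ]
	    }
	    `

	    func main() {
	        // mirrors the "sudo mkdir -p /etc/cni/net.d" and scp steps in the log
	        if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
	            panic(err)
	        }
	        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
	            panic(err)
	        }
	    }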
	I0925 03:34:13.004656    1555 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0925 03:34:13.004755    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=1bf6c3d5317028f348e55ea19d261973a6487d3c minikube.k8s.io/name=addons-183000 minikube.k8s.io/updated_at=2023_09_25T03_34_13_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:13.004757    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:13.008164    1555 ops.go:34] apiserver oom_adj: -16
	I0925 03:34:13.063625    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:13.095139    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:13.629666    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:14.129649    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:14.629662    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:15.129655    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:15.629628    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:16.129723    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:16.629660    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:17.129683    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:17.629643    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:18.129619    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:18.629638    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:19.129594    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:19.629589    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:20.129625    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:20.629540    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:21.129598    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:21.629573    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:22.129550    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:22.629493    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:23.129517    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:23.629511    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:24.129464    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:24.629448    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:25.129565    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:25.629529    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:26.129496    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:26.629436    1555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 03:34:26.667079    1555 kubeadm.go:1081] duration metric: took 13.662618083s to wait for elevateKubeSystemPrivileges.
	I0925 03:34:26.667097    1555 kubeadm.go:406] StartCluster complete in 21.003673917s
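	The burst of identical "kubectl get sa default" runs above is a fixed-interval retry: after creating the minikube-rbac cluster role binding, minikube polls every 500ms until the default service account in kube-system is visible, which is what the 13.66s elevateKubeSystemPrivileges metric measures. A minimal Go sketch of that pattern, using the kubectl and kubeconfig paths from the log (the timeout is illustrative; the real deadline is not shown):

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "time"
	    )

	    // waitForDefaultSA polls "kubectl get sa default" until the command
	    // succeeds or the deadline passes, mirroring the 500ms retry loop above.
	    func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	        deadline := time.Now().Add(timeout)
	        for {
	            cmd := exec.Command(kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
	            if err := cmd.Run(); err == nil {
	                return nil // the default service account exists
	            }
	            if time.Now().After(deadline) {
	                return fmt.Errorf("default service account not ready after %s", timeout)
	            }
	            time.Sleep(500 * time.Millisecond)
	        }
	    }

	    func main() {
	        err := waitForDefaultSA(
	            "/var/lib/minikube/binaries/v1.28.2/kubectl",
	            "/var/lib/minikube/kubeconfig",
	            2*time.Minute, // assumed timeout for illustration
	        )
	        if err != nil {
	            fmt.Println(err)
	        }
	    }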
	I0925 03:34:26.667106    1555 settings.go:142] acquiring lock: {Name:mkb5a0822179f07ef9369c44aa9b64eb9ef74eed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 03:34:26.667266    1555 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17297-1010/kubeconfig
	I0925 03:34:26.667431    1555 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/kubeconfig: {Name:mkaa9d09ca2bf27c1a43efc9acf938adcc68343d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 03:34:26.667677    1555 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0925 03:34:26.667722    1555 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0925 03:34:26.667779    1555 addons.go:69] Setting volumesnapshots=true in profile "addons-183000"
	I0925 03:34:26.667782    1555 addons.go:69] Setting cloud-spanner=true in profile "addons-183000"
	I0925 03:34:26.667785    1555 addons.go:231] Setting addon volumesnapshots=true in "addons-183000"
	I0925 03:34:26.667789    1555 addons.go:231] Setting addon cloud-spanner=true in "addons-183000"
	I0925 03:34:26.667790    1555 addons.go:69] Setting default-storageclass=true in profile "addons-183000"
	I0925 03:34:26.667799    1555 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-183000"
	I0925 03:34:26.667820    1555 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-183000"
	I0925 03:34:26.667848    1555 host.go:66] Checking if "addons-183000" exists ...
	I0925 03:34:26.667850    1555 addons.go:69] Setting registry=true in profile "addons-183000"
	I0925 03:34:26.667858    1555 addons.go:231] Setting addon registry=true in "addons-183000"
	I0925 03:34:26.667849    1555 addons.go:69] Setting metrics-server=true in profile "addons-183000"
	I0925 03:34:26.667873    1555 addons.go:231] Setting addon metrics-server=true in "addons-183000"
	I0925 03:34:26.667880    1555 host.go:66] Checking if "addons-183000" exists ...
	I0925 03:34:26.667881    1555 addons.go:69] Setting gcp-auth=true in profile "addons-183000"
	I0925 03:34:26.667902    1555 mustload.go:65] Loading cluster: addons-183000
	I0925 03:34:26.667915    1555 host.go:66] Checking if "addons-183000" exists ...
	I0925 03:34:26.667948    1555 host.go:66] Checking if "addons-183000" exists ...
	I0925 03:34:26.667977    1555 config.go:182] Loaded profile config "addons-183000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 03:34:26.668033    1555 addons.go:69] Setting ingress-dns=true in profile "addons-183000"
	I0925 03:34:26.668035    1555 addons.go:69] Setting inspektor-gadget=true in profile "addons-183000"
	I0925 03:34:26.668042    1555 addons.go:69] Setting storage-provisioner=true in profile "addons-183000"
	I0925 03:34:26.668047    1555 addons.go:231] Setting addon storage-provisioner=true in "addons-183000"
	I0925 03:34:26.668049    1555 addons.go:231] Setting addon inspektor-gadget=true in "addons-183000"
	I0925 03:34:26.668059    1555 host.go:66] Checking if "addons-183000" exists ...
	I0925 03:34:26.668076    1555 host.go:66] Checking if "addons-183000" exists ...
	W0925 03:34:26.668189    1555 host.go:54] host status for "addons-183000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/monitor: connect: connection refused
	W0925 03:34:26.668197    1555 addons.go:277] "addons-183000" is not running, setting volumesnapshots=true and skipping enablement (err=<nil>)
	I0925 03:34:26.667780    1555 addons.go:69] Setting ingress=true in profile "addons-183000"
	I0925 03:34:26.668202    1555 addons.go:231] Setting addon ingress=true in "addons-183000"
	I0925 03:34:26.668215    1555 host.go:66] Checking if "addons-183000" exists ...
	W0925 03:34:26.668271    1555 host.go:54] host status for "addons-183000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/monitor: connect: connection refused
	W0925 03:34:26.668277    1555 addons.go:277] "addons-183000" is not running, setting storage-provisioner=true and skipping enablement (err=<nil>)
	I0925 03:34:26.668038    1555 addons.go:231] Setting addon ingress-dns=true in "addons-183000"
	I0925 03:34:26.668289    1555 host.go:66] Checking if "addons-183000" exists ...
	I0925 03:34:26.667873    1555 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-183000"
	I0925 03:34:26.668351    1555 host.go:66] Checking if "addons-183000" exists ...
	W0925 03:34:26.668420    1555 host.go:54] host status for "addons-183000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/monitor: connect: connection refused
	W0925 03:34:26.668426    1555 addons.go:277] "addons-183000" is not running, setting ingress=true and skipping enablement (err=<nil>)
	I0925 03:34:26.668428    1555 addons.go:467] Verifying addon ingress=true in "addons-183000"
	I0925 03:34:26.671815    1555 out.go:177] * Verifying ingress addon...
	I0925 03:34:26.668077    1555 config.go:182] Loaded profile config "addons-183000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	W0925 03:34:26.668443    1555 host.go:54] host status for "addons-183000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/monitor: connect: connection refused
	W0925 03:34:26.668492    1555 host.go:54] host status for "addons-183000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/monitor: connect: connection refused
	W0925 03:34:26.668560    1555 host.go:54] host status for "addons-183000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/monitor: connect: connection refused
	W0925 03:34:26.668562    1555 host.go:54] host status for "addons-183000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/monitor: connect: connection refused
	W0925 03:34:26.668565    1555 host.go:54] host status for "addons-183000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/monitor: connect: connection refused
	I0925 03:34:26.674660    1555 addons.go:231] Setting addon default-storageclass=true in "addons-183000"
	W0925 03:34:26.679882    1555 addons.go:277] "addons-183000" is not running, setting ingress-dns=true and skipping enablement (err=<nil>)
	W0925 03:34:26.679903    1555 addons.go:277] "addons-183000" is not running, setting csi-hostpath-driver=true and skipping enablement (err=<nil>)
	W0925 03:34:26.679909    1555 addons.go:277] "addons-183000" is not running, setting cloud-spanner=true and skipping enablement (err=<nil>)
	W0925 03:34:26.679910    1555 addons.go:277] "addons-183000" is not running, setting metrics-server=true and skipping enablement (err=<nil>)
	W0925 03:34:26.679914    1555 addons.go:277] "addons-183000" is not running, setting registry=true and skipping enablement (err=<nil>)
	I0925 03:34:26.680408    1555 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0925 03:34:26.680587    1555 host.go:66] Checking if "addons-183000" exists ...
	I0925 03:34:26.685884    1555 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-183000"
	I0925 03:34:26.691851    1555 out.go:177] * Verifying csi-hostpath-driver addon...
	I0925 03:34:26.685950    1555 addons.go:467] Verifying addon metrics-server=true in "addons-183000"
	I0925 03:34:26.685956    1555 addons.go:467] Verifying addon registry=true in "addons-183000"
	I0925 03:34:26.685976    1555 host.go:66] Checking if "addons-183000" exists ...
	I0925 03:34:26.685980    1555 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.20.0
	I0925 03:34:26.693878    1555 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0925 03:34:26.696802    1555 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-183000" context rescaled to 1 replicas
	I0925 03:34:26.698859    1555 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 03:34:26.700109    1555 out.go:177] * Verifying Kubernetes components...
	I0925 03:34:26.699453    1555 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0925 03:34:26.699742    1555 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0925 03:34:26.709918    1555 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0925 03:34:26.713912    1555 out.go:177] * Verifying registry addon...
	I0925 03:34:26.717867    1555 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0925 03:34:26.720802    1555 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/id_rsa Username:docker}
	I0925 03:34:26.717891    1555 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0925 03:34:26.720819    1555 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0925 03:34:26.720825    1555 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/id_rsa Username:docker}
	I0925 03:34:26.721266    1555 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0925 03:34:26.726699    1555 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=registry
	I0925 03:34:26.728776    1555 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0925 03:34:26.751434    1555 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0925 03:34:26.751798    1555 node_ready.go:35] waiting up to 6m0s for node "addons-183000" to be "Ready" ...
	I0925 03:34:26.753298    1555 node_ready.go:49] node "addons-183000" has status "Ready":"True"
	I0925 03:34:26.753320    1555 node_ready.go:38] duration metric: took 1.500542ms waiting for node "addons-183000" to be "Ready" ...
	I0925 03:34:26.753326    1555 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods, including pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler], to be "Ready" ...
	I0925 03:34:26.756603    1555 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-nj9v5" in "kube-system" namespace to be "Ready" ...
	I0925 03:34:26.894346    1555 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0925 03:34:26.894357    1555 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0925 03:34:26.894362    1555 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0925 03:34:26.913613    1555 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I0925 03:34:26.913623    1555 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0925 03:34:26.955544    1555 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0925 03:34:26.955558    1555 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0925 03:34:26.966254    1555 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0925 03:34:26.966263    1555 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0925 03:34:26.970978    1555 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0925 03:34:26.970984    1555 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0925 03:34:26.980045    1555 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I0925 03:34:26.980056    1555 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0925 03:34:27.011877    1555 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0925 03:34:27.011886    1555 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I0925 03:34:27.035496    1555 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0925 03:34:27.284243    1555 start.go:923] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
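	The pipeline at 03:34:26.751 is how that record lands in CoreDNS: minikube reads the coredns ConfigMap, uses sed to splice a hosts{} stanza (mapping 192.168.105.1 to host.minikube.internal, with fallthrough) in front of the forward directive plus a log directive after errors, then kubectl-replaces the ConfigMap. The string edit is equivalent to this Go sketch (the Corefile markers come from the sed expression in the log; the sample Corefile is abbreviated):

	    package main

	    import (
	        "fmt"
	        "strings"
	    )

	    // injectHostRecord splices a hosts{} stanza ahead of the Corefile's
	    // forward directive so pods can resolve host.minikube.internal to the
	    // host IP, matching the sed expression minikube runs above.
	    func injectHostRecord(corefile, hostIP string) string {
	        stanza := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	        const forward = "        forward . /etc/resolv.conf"
	        return strings.Replace(corefile, forward, stanza+forward, 1)
	    }

	    func main() {
	        corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n        cache 30\n}\n"
	        fmt.Print(injectHostRecord(corefile, "192.168.105.1"))
	    }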
	I0925 03:34:28.770683    1555 pod_ready.go:102] pod "coredns-5dd5756b68-nj9v5" in "kube-system" namespace has status "Ready":"False"
	I0925 03:34:30.771066    1555 pod_ready.go:102] pod "coredns-5dd5756b68-nj9v5" in "kube-system" namespace has status "Ready":"False"
	I0925 03:34:33.271406    1555 pod_ready.go:102] pod "coredns-5dd5756b68-nj9v5" in "kube-system" namespace has status "Ready":"False"
	I0925 03:34:33.290034    1555 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0925 03:34:33.290047    1555 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/id_rsa Username:docker}
	I0925 03:34:33.333376    1555 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0925 03:34:33.340520    1555 addons.go:231] Setting addon gcp-auth=true in "addons-183000"
	I0925 03:34:33.340540    1555 host.go:66] Checking if "addons-183000" exists ...
	I0925 03:34:33.341291    1555 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0925 03:34:33.341299    1555 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/addons-183000/id_rsa Username:docker}
	I0925 03:34:33.385047    1555 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0925 03:34:33.390017    1555 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0925 03:34:33.393078    1555 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0925 03:34:33.393083    1555 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0925 03:34:33.401443    1555 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0925 03:34:33.401449    1555 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0925 03:34:33.408814    1555 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0925 03:34:33.408821    1555 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0925 03:34:33.415868    1555 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0925 03:34:33.956480    1555 addons.go:467] Verifying addon gcp-auth=true in "addons-183000"
	I0925 03:34:33.962940    1555 out.go:177] * Verifying gcp-auth addon...
	I0925 03:34:33.970267    1555 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0925 03:34:33.972814    1555 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0925 03:34:33.972821    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:33.975859    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:34.479146    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:34.978976    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:35.477962    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:35.770777    1555 pod_ready.go:102] pod "coredns-5dd5756b68-nj9v5" in "kube-system" namespace has status "Ready":"False"
	I0925 03:34:35.978841    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:36.478564    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:36.978738    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:37.478896    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:37.978838    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:38.273778    1555 pod_ready.go:102] pod "coredns-5dd5756b68-nj9v5" in "kube-system" namespace has status "Ready":"False"
	I0925 03:34:38.478811    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:38.770881    1555 pod_ready.go:92] pod "coredns-5dd5756b68-nj9v5" in "kube-system" namespace has status "Ready":"True"
	I0925 03:34:38.770889    1555 pod_ready.go:81] duration metric: took 12.014493833s waiting for pod "coredns-5dd5756b68-nj9v5" in "kube-system" namespace to be "Ready" ...
	I0925 03:34:38.770893    1555 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-183000" in "kube-system" namespace to be "Ready" ...
	I0925 03:34:38.773593    1555 pod_ready.go:92] pod "etcd-addons-183000" in "kube-system" namespace has status "Ready":"True"
	I0925 03:34:38.773599    1555 pod_ready.go:81] duration metric: took 2.702459ms waiting for pod "etcd-addons-183000" in "kube-system" namespace to be "Ready" ...
	I0925 03:34:38.773602    1555 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-183000" in "kube-system" namespace to be "Ready" ...
	I0925 03:34:38.775799    1555 pod_ready.go:92] pod "kube-apiserver-addons-183000" in "kube-system" namespace has status "Ready":"True"
	I0925 03:34:38.775804    1555 pod_ready.go:81] duration metric: took 2.198875ms waiting for pod "kube-apiserver-addons-183000" in "kube-system" namespace to be "Ready" ...
	I0925 03:34:38.775808    1555 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-183000" in "kube-system" namespace to be "Ready" ...
	I0925 03:34:38.777922    1555 pod_ready.go:92] pod "kube-controller-manager-addons-183000" in "kube-system" namespace has status "Ready":"True"
	I0925 03:34:38.777929    1555 pod_ready.go:81] duration metric: took 2.118625ms waiting for pod "kube-controller-manager-addons-183000" in "kube-system" namespace to be "Ready" ...
	I0925 03:34:38.777933    1555 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7t7bh" in "kube-system" namespace to be "Ready" ...
	I0925 03:34:38.780129    1555 pod_ready.go:92] pod "kube-proxy-7t7bh" in "kube-system" namespace has status "Ready":"True"
	I0925 03:34:38.780136    1555 pod_ready.go:81] duration metric: took 2.199875ms waiting for pod "kube-proxy-7t7bh" in "kube-system" namespace to be "Ready" ...
	I0925 03:34:38.780139    1555 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-183000" in "kube-system" namespace to be "Ready" ...
	I0925 03:34:38.977389    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:39.173086    1555 pod_ready.go:92] pod "kube-scheduler-addons-183000" in "kube-system" namespace has status "Ready":"True"
	I0925 03:34:39.173096    1555 pod_ready.go:81] duration metric: took 392.960166ms waiting for pod "kube-scheduler-addons-183000" in "kube-system" namespace to be "Ready" ...
	I0925 03:34:39.173100    1555 pod_ready.go:38] duration metric: took 12.419997458s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0925 03:34:39.173111    1555 api_server.go:52] waiting for apiserver process to appear ...
	I0925 03:34:39.173181    1555 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 03:34:39.178068    1555 api_server.go:72] duration metric: took 12.479424625s to wait for apiserver process to appear ...
	I0925 03:34:39.178075    1555 api_server.go:88] waiting for apiserver healthz status ...
	I0925 03:34:39.178081    1555 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I0925 03:34:39.182471    1555 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
	I0925 03:34:39.183204    1555 api_server.go:141] control plane version: v1.28.2
	I0925 03:34:39.183210    1555 api_server.go:131] duration metric: took 5.132042ms to wait for apiserver health ...
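	The healthz wait above is a plain HTTPS GET: minikube probes https://192.168.105.2:8443/healthz until it answers 200 with the body "ok", then reads the control plane version. A minimal Go sketch of the probe (it skips certificate verification for brevity; minikube instead verifies against the cluster CA):

	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "io"
	        "net/http"
	        "time"
	    )

	    // probeHealthz issues the same check as the log: GET /healthz on the
	    // apiserver and treat a 200 response with body "ok" as healthy.
	    func probeHealthz(url string) error {
	        client := &http.Client{
	            Timeout: 5 * time.Second,
	            Transport: &http.Transport{
	                // Brevity only: minikube verifies the apiserver cert
	                // against the cluster CA rather than skipping checks.
	                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	            },
	        }
	        resp, err := client.Get(url)
	        if err != nil {
	            return err
	        }
	        defer resp.Body.Close()
	        body, _ := io.ReadAll(resp.Body)
	        if resp.StatusCode != http.StatusOK {
	            return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	        }
	        fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
	        return nil
	    }

	    func main() {
	        if err := probeHealthz("https://192.168.105.2:8443/healthz"); err != nil {
	            fmt.Println(err)
	        }
	    }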
	I0925 03:34:39.183213    1555 system_pods.go:43] waiting for kube-system pods to appear ...
	I0925 03:34:39.372354    1555 system_pods.go:59] 6 kube-system pods found
	I0925 03:34:39.372365    1555 system_pods.go:61] "coredns-5dd5756b68-nj9v5" [b1bb0e62-0339-479f-9572-1e07ab015a1d] Running
	I0925 03:34:39.372368    1555 system_pods.go:61] "etcd-addons-183000" [98901ac9-8165-4fad-b6a6-6c757da8e783] Running
	I0925 03:34:39.372371    1555 system_pods.go:61] "kube-apiserver-addons-183000" [b3899bc1-2055-47fb-aded-8cc3e5ca8b22] Running
	I0925 03:34:39.372373    1555 system_pods.go:61] "kube-controller-manager-addons-183000" [12803b97-0e90-4869-a114-2dce351af701] Running
	I0925 03:34:39.372376    1555 system_pods.go:61] "kube-proxy-7t7bh" [b51c70db-a512-4aae-af91-8b45e6ce9f89] Running
	I0925 03:34:39.372378    1555 system_pods.go:61] "kube-scheduler-addons-183000" [543428f6-b6ce-448c-9d3e-48c775396c75] Running
	I0925 03:34:39.372382    1555 system_pods.go:74] duration metric: took 189.166917ms to wait for pod list to return data ...
	I0925 03:34:39.372386    1555 default_sa.go:34] waiting for default service account to be created ...
	I0925 03:34:39.478483    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:39.569942    1555 default_sa.go:45] found service account: "default"
	I0925 03:34:39.569952    1555 default_sa.go:55] duration metric: took 197.566292ms for default service account to be created ...
	I0925 03:34:39.569955    1555 system_pods.go:116] waiting for k8s-apps to be running ...
	I0925 03:34:39.771555    1555 system_pods.go:86] 6 kube-system pods found
	I0925 03:34:39.771566    1555 system_pods.go:89] "coredns-5dd5756b68-nj9v5" [b1bb0e62-0339-479f-9572-1e07ab015a1d] Running
	I0925 03:34:39.771569    1555 system_pods.go:89] "etcd-addons-183000" [98901ac9-8165-4fad-b6a6-6c757da8e783] Running
	I0925 03:34:39.771571    1555 system_pods.go:89] "kube-apiserver-addons-183000" [b3899bc1-2055-47fb-aded-8cc3e5ca8b22] Running
	I0925 03:34:39.771573    1555 system_pods.go:89] "kube-controller-manager-addons-183000" [12803b97-0e90-4869-a114-2dce351af701] Running
	I0925 03:34:39.771576    1555 system_pods.go:89] "kube-proxy-7t7bh" [b51c70db-a512-4aae-af91-8b45e6ce9f89] Running
	I0925 03:34:39.771579    1555 system_pods.go:89] "kube-scheduler-addons-183000" [543428f6-b6ce-448c-9d3e-48c775396c75] Running
	I0925 03:34:39.771582    1555 system_pods.go:126] duration metric: took 201.627792ms to wait for k8s-apps to be running ...
	I0925 03:34:39.771585    1555 system_svc.go:44] waiting for kubelet service to be running ....
	I0925 03:34:39.771649    1555 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0925 03:34:39.777059    1555 system_svc.go:56] duration metric: took 5.471834ms WaitForService to wait for kubelet.
	I0925 03:34:39.777072    1555 kubeadm.go:581] duration metric: took 13.078440792s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0925 03:34:39.777081    1555 node_conditions.go:102] verifying NodePressure condition ...
	I0925 03:34:39.970496    1555 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0925 03:34:39.970507    1555 node_conditions.go:123] node cpu capacity is 2
	I0925 03:34:39.970512    1555 node_conditions.go:105] duration metric: took 193.43225ms to run NodePressure ...
	I0925 03:34:39.970518    1555 start.go:228] waiting for startup goroutines ...
	I0925 03:34:39.977869    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:40.478718    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:40.978494    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:41.478330    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:41.978723    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:42.478484    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:42.978499    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:43.478310    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:43.978560    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:44.478626    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:44.978747    1555 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 03:34:45.478652    1555 kapi.go:107] duration metric: took 11.508592542s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0925 03:34:45.482917    1555 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-183000 cluster.
	I0925 03:34:45.486908    1555 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0925 03:34:45.489839    1555 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
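	Per the note above, opting one pod out of credential mounting takes a single label. A sketch of a pod manifest carrying the gcp-auth-skip-secret key (the pod name and image are placeholders; only the label key matters):

	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: example-no-gcp-auth      # placeholder name
	      labels:
	        gcp-auth-skip-secret: "true" # the gcp-auth webhook skips pods with this label key
	    spec:
	      containers:
	      - name: app
	        image: nginx                 # placeholder image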
	I0925 03:40:26.681420    1555 kapi.go:107] duration metric: took 6m0.007630792s to wait for app.kubernetes.io/name=ingress-nginx ...
	W0925 03:40:26.681519    1555 out.go:239] ! Enabling 'ingress' returned an error: running callbacks: [waiting for app.kubernetes.io/name=ingress-nginx pods: context deadline exceeded]
	I0925 03:40:26.713271    1555 kapi.go:107] duration metric: took 6m0.020443166s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	W0925 03:40:26.713301    1555 out.go:239] ! Enabling 'csi-hostpath-driver' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=csi-hostpath-driver pods: context deadline exceeded]
	I0925 03:40:26.715027    1555 kapi.go:107] duration metric: took 6m0.000386167s to wait for kubernetes.io/minikube-addons=registry ...
	W0925 03:40:26.715058    1555 out.go:239] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
	I0925 03:40:26.720408    1555 out.go:177] * Enabled addons: volumesnapshots, storage-provisioner, cloud-spanner, ingress-dns, metrics-server, default-storageclass, inspektor-gadget, gcp-auth
	I0925 03:40:26.729284    1555 addons.go:502] enable addons completed in 6m0.068199458s: enabled=[volumesnapshots storage-provisioner cloud-spanner ingress-dns metrics-server default-storageclass inspektor-gadget gcp-auth]
	I0925 03:40:26.729295    1555 start.go:233] waiting for cluster config update ...
	I0925 03:40:26.729300    1555 start.go:242] writing updated cluster config ...
	I0925 03:40:26.729761    1555 ssh_runner.go:195] Run: rm -f paused
	I0925 03:40:26.760421    1555 start.go:600] kubectl: 1.27.2, cluster: 1.28.2 (minor skew: 1)
	I0925 03:40:26.764251    1555 out.go:177] * Done! kubectl is now configured to use "addons-183000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Mon 2023-09-25 10:33:55 UTC, ends at Mon 2023-09-25 10:54:04 UTC. --
	Sep 25 10:34:42 addons-183000 dockerd[1111]: time="2023-09-25T10:34:42.358246450Z" level=info msg="shim disconnected" id=d009b921a4cc83c6746a6427d33a20b5315cc03832a52dae5f1cc5bda62fc19b namespace=moby
	Sep 25 10:34:42 addons-183000 dockerd[1111]: time="2023-09-25T10:34:42.358270381Z" level=warning msg="cleaning up after shim disconnected" id=d009b921a4cc83c6746a6427d33a20b5315cc03832a52dae5f1cc5bda62fc19b namespace=moby
	Sep 25 10:34:42 addons-183000 dockerd[1111]: time="2023-09-25T10:34:42.358274585Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 25 10:34:42 addons-183000 dockerd[1111]: time="2023-09-25T10:34:42.372096465Z" level=warning msg="cleanup warnings time=\"2023-09-25T10:34:42Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Sep 25 10:34:42 addons-183000 dockerd[1105]: time="2023-09-25T10:34:42.400036385Z" level=warning msg="reference for unknown type: " digest="sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf" remote="gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf"
	Sep 25 10:34:42 addons-183000 dockerd[1105]: time="2023-09-25T10:34:42.404130643Z" level=info msg="ignoring event" container=d09446869187232df599a79609b2cc6878507cb6c4070aec9a79632485a47117 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 25 10:34:42 addons-183000 dockerd[1111]: time="2023-09-25T10:34:42.404287046Z" level=info msg="shim disconnected" id=d09446869187232df599a79609b2cc6878507cb6c4070aec9a79632485a47117 namespace=moby
	Sep 25 10:34:42 addons-183000 dockerd[1111]: time="2023-09-25T10:34:42.404320674Z" level=warning msg="cleaning up after shim disconnected" id=d09446869187232df599a79609b2cc6878507cb6c4070aec9a79632485a47117 namespace=moby
	Sep 25 10:34:42 addons-183000 dockerd[1111]: time="2023-09-25T10:34:42.404325086Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 25 10:34:44 addons-183000 cri-dockerd[998]: time="2023-09-25T10:34:44Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf: Status: Downloaded newer image for gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf"
	Sep 25 10:34:44 addons-183000 dockerd[1111]: time="2023-09-25T10:34:44.321936694Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 25 10:34:44 addons-183000 dockerd[1111]: time="2023-09-25T10:34:44.321971829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 10:34:44 addons-183000 dockerd[1111]: time="2023-09-25T10:34:44.321982528Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 25 10:34:44 addons-183000 dockerd[1111]: time="2023-09-25T10:34:44.321989314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 10:52:28 addons-183000 dockerd[1111]: time="2023-09-25T10:52:28.470595101Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 25 10:52:28 addons-183000 dockerd[1111]: time="2023-09-25T10:52:28.470647350Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 10:52:28 addons-183000 dockerd[1111]: time="2023-09-25T10:52:28.470663058Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 25 10:52:28 addons-183000 dockerd[1111]: time="2023-09-25T10:52:28.470673850Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 10:52:28 addons-183000 cri-dockerd[998]: time="2023-09-25T10:52:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/df99a16ef61333f49304447de1f31c9677e9243b43dae14dfba57e8a2aeeb1be/resolv.conf as [nameserver 10.96.0.10 search headlamp.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 25 10:52:28 addons-183000 dockerd[1105]: time="2023-09-25T10:52:28.813334559Z" level=warning msg="reference for unknown type: " digest="sha256:bb15916c96306cd14f1c9c09c639d01d1d1fb854fd770bf99f3e7a9deb584753" remote="ghcr.io/headlamp-k8s/headlamp@sha256:bb15916c96306cd14f1c9c09c639d01d1d1fb854fd770bf99f3e7a9deb584753"
	Sep 25 10:52:33 addons-183000 cri-dockerd[998]: time="2023-09-25T10:52:33Z" level=info msg="Stop pulling image ghcr.io/headlamp-k8s/headlamp:v0.19.1@sha256:bb15916c96306cd14f1c9c09c639d01d1d1fb854fd770bf99f3e7a9deb584753: Status: Downloaded newer image for ghcr.io/headlamp-k8s/headlamp@sha256:bb15916c96306cd14f1c9c09c639d01d1d1fb854fd770bf99f3e7a9deb584753"
	Sep 25 10:52:34 addons-183000 dockerd[1111]: time="2023-09-25T10:52:34.018791826Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 25 10:52:34 addons-183000 dockerd[1111]: time="2023-09-25T10:52:34.018844700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 10:52:34 addons-183000 dockerd[1111]: time="2023-09-25T10:52:34.018856825Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 25 10:52:34 addons-183000 dockerd[1111]: time="2023-09-25T10:52:34.018863408Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	d5793dcd01c69       ghcr.io/headlamp-k8s/headlamp@sha256:bb15916c96306cd14f1c9c09c639d01d1d1fb854fd770bf99f3e7a9deb584753               About a minute ago   Running             headlamp                  0                   df99a16ef6133       headlamp-58b88cff49-kdgv2
	f0ceeef2fd99f       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf        19 minutes ago       Running             gcp-auth                  0                   217fc96b3ae84       gcp-auth-d4c87556c-fgkgk
	3214d7d3645b3       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:01b7311f9512411ef6530e09dbdd3aeaea0abc4101227dbead4d44c36b255ca7   19 minutes ago       Running             gadget                    0                   1f38ec635c03d       gadget-dmqnx
	09ae8580d310e       97e04611ad434                                                                                                       19 minutes ago       Running             coredns                   0                   9802832060d13       coredns-5dd5756b68-nj9v5
	fff72387d957b       7da62c127fc0f                                                                                                       19 minutes ago       Running             kube-proxy                0                   2514b88f9fbec       kube-proxy-7t7bh
	e24563a552742       89d57b83c1786                                                                                                       19 minutes ago       Running             kube-controller-manager   0                   7170972f2383c       kube-controller-manager-addons-183000
	e38f0c6d58f79       30bb499447fe1                                                                                                       19 minutes ago       Running             kube-apiserver            0                   e3ec8dad501d8       kube-apiserver-addons-183000
	202a7fdac8250       9cdd6470f48c8                                                                                                       19 minutes ago       Running             etcd                      0                   f07db97eda3c5       etcd-addons-183000
	5a87dfcd0e1a4       64fc40cee3716                                                                                                       19 minutes ago       Running             kube-scheduler            0                   88f62df9ef878       kube-scheduler-addons-183000
	
	* 
	* ==> coredns [09ae8580d310] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:53855 - 12762 "HINFO IN 6175233926506353361.1980247959579836404. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004134462s
	[INFO] 10.244.0.5:53045 - 37584 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000106198s
	[INFO] 10.244.0.5:58309 - 60928 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000170558s
	[INFO] 10.244.0.5:51843 - 23622 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000213104s
	[INFO] 10.244.0.5:42760 - 58990 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000042504s
	[INFO] 10.244.0.5:51340 - 46119 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00004929s
	[INFO] 10.244.0.5:39848 - 8379 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000023105s
	[INFO] 10.244.0.5:32887 - 31577 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001136668s
	[INFO] 10.244.0.5:49269 - 43084 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.001085546s
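A note on the queries above: the NXDOMAIN responses are the normal Kubernetes search-domain walk (gcp-auth.svc.cluster.local, svc.cluster.local, cluster.local) a pod performs before the public name finally resolves with NOERROR; they are not failures. A minimal way to reproduce the walk from inside the cluster (the dns-probe pod here is a hypothetical scratch pod, not part of this test run):

	kubectl --context addons-183000 run dns-probe --image=busybox:1.36 --rm -it --restart=Never -- nslookup storage.googleapis.com
	# CoreDNS logs one query per entry in the pod's resolv.conf search list, matching the lines above.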
	
	* 
	* ==> describe nodes <==
	* Name:               addons-183000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-183000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1bf6c3d5317028f348e55ea19d261973a6487d3c
	                    minikube.k8s.io/name=addons-183000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_25T03_34_13_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 25 Sep 2023 10:34:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-183000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 25 Sep 2023 10:53:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 25 Sep 2023 10:52:43 +0000   Mon, 25 Sep 2023 10:34:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 25 Sep 2023 10:52:43 +0000   Mon, 25 Sep 2023 10:34:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 25 Sep 2023 10:52:43 +0000   Mon, 25 Sep 2023 10:34:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 25 Sep 2023 10:52:43 +0000   Mon, 25 Sep 2023 10:34:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-183000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 3ec93b0c295a46b69f667e92919bae36
	  System UUID:                3ec93b0c295a46b69f667e92919bae36
	  Boot ID:                    e140f335-14d6-4d36-af6f-4c16a72ee860
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  gadget                      gadget-dmqnx                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  gcp-auth                    gcp-auth-d4c87556c-fgkgk                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  headlamp                    headlamp-58b88cff49-kdgv2                0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 coredns-5dd5756b68-nj9v5                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     19m
	  kube-system                 etcd-addons-183000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         19m
	  kube-system                 kube-apiserver-addons-183000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-addons-183000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-7t7bh                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-addons-183000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)   0 (0%)
	  memory             170Mi (4%)   170Mi (4%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 19m   kube-proxy       
	  Normal  Starting                 19m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  19m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  19m   kubelet          Node addons-183000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m   kubelet          Node addons-183000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m   kubelet          Node addons-183000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                19m   kubelet          Node addons-183000 status is now: NodeReady
	  Normal  RegisteredNode           19m   node-controller  Node addons-183000 event: Registered Node addons-183000 in Controller
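For reference, the node summary above is standard describe output against the test profile and can be regenerated at any point while the cluster is up (sketch, assuming the kubeconfig context from this run):

	kubectl --context addons-183000 describe node addons-183000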
	
	* 
	* ==> dmesg <==
	* [  +0.641440] EINJ: EINJ table not found.
	[  +0.489201] systemd-fstab-generator[117]: Ignoring "noauto" for root device
	[  +0.043090] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000792] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +3.110509] systemd-fstab-generator[482]: Ignoring "noauto" for root device
	[  +0.074666] systemd-fstab-generator[494]: Ignoring "noauto" for root device
	[  +0.418795] systemd-fstab-generator[667]: Ignoring "noauto" for root device
	[  +0.183648] systemd-fstab-generator[704]: Ignoring "noauto" for root device
	[  +0.073331] systemd-fstab-generator[715]: Ignoring "noauto" for root device
	[  +0.088908] systemd-fstab-generator[728]: Ignoring "noauto" for root device
	[  +1.149460] kauditd_printk_skb: 17 callbacks suppressed
	[  +0.104006] systemd-fstab-generator[917]: Ignoring "noauto" for root device
	[  +0.078468] systemd-fstab-generator[928]: Ignoring "noauto" for root device
	[  +0.058376] systemd-fstab-generator[939]: Ignoring "noauto" for root device
	[  +0.070842] systemd-fstab-generator[950]: Ignoring "noauto" for root device
	[  +0.085054] systemd-fstab-generator[991]: Ignoring "noauto" for root device
	[Sep25 10:34] systemd-fstab-generator[1098]: Ignoring "noauto" for root device
	[  +2.191489] kauditd_printk_skb: 29 callbacks suppressed
	[  +2.399489] systemd-fstab-generator[1471]: Ignoring "noauto" for root device
	[  +5.122490] systemd-fstab-generator[2347]: Ignoring "noauto" for root device
	[ +14.463207] kauditd_printk_skb: 41 callbacks suppressed
	[  +6.798894] kauditd_printk_skb: 21 callbacks suppressed
	[  +4.810513] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[  +3.498700] kauditd_printk_skb: 12 callbacks suppressed
	[Sep25 10:52] kauditd_printk_skb: 5 callbacks suppressed
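The "Driver has suspect GRO implementation" warning above means the virtio NIC's generic receive offload may degrade TCP throughput in the guest. A hedged diagnostic sketch, assuming ethtool is present in the Buildroot guest image (it may not be):

	minikube ssh -p addons-183000 -- "ethtool -k eth0 | grep generic-receive-offload"
	# If GRO is on and throughput matters, it can be disabled for the session:
	minikube ssh -p addons-183000 -- "sudo ethtool -K eth0 gro off"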
	
	* 
	* ==> etcd [202a7fdac825] <==
	* {"level":"info","ts":"2023-09-25T10:34:09.756472Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-25T10:34:09.756619Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-25T10:34:09.756498Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-25T10:34:09.756725Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-25T10:34:09.756515Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-25T10:34:09.757894Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-25T10:34:09.756532Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-25T10:34:09.758174Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-25T10:34:09.757894Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.2:2379"}
	{"level":"info","ts":"2023-09-25T10:34:31.937317Z","caller":"traceutil/trace.go:171","msg":"trace[667548922] transaction","detail":"{read_only:false; response_revision:416; number_of_response:1; }","duration":"126.937574ms","start":"2023-09-25T10:34:31.810371Z","end":"2023-09-25T10:34:31.937309Z","steps":["trace[667548922] 'process raft request'  (duration: 126.824018ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-25T10:34:36.99882Z","caller":"traceutil/trace.go:171","msg":"trace[243510449] linearizableReadLoop","detail":"{readStateIndex:482; appliedIndex:481; }","duration":"165.982552ms","start":"2023-09-25T10:34:36.832829Z","end":"2023-09-25T10:34:36.998811Z","steps":["trace[243510449] 'read index received'  (duration: 165.770762ms)","trace[243510449] 'applied index is now lower than readState.Index'  (duration: 211.209µs)"],"step_count":2}
	{"level":"warn","ts":"2023-09-25T10:34:36.998969Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"166.151453ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2023-09-25T10:34:36.999019Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.796797ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-5dd5756b68-nj9v5\" ","response":"range_response_count:1 size:5002"}
	{"level":"info","ts":"2023-09-25T10:34:36.999045Z","caller":"traceutil/trace.go:171","msg":"trace[2057756314] range","detail":"{range_begin:/registry/pods/kube-system/coredns-5dd5756b68-nj9v5; range_end:; response_count:1; response_revision:469; }","duration":"123.811177ms","start":"2023-09-25T10:34:36.875219Z","end":"2023-09-25T10:34:36.99903Z","steps":["trace[2057756314] 'agreement among raft nodes before linearized reading'  (duration: 123.788776ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-25T10:34:36.999164Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"162.803156ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:6 size:31393"}
	{"level":"info","ts":"2023-09-25T10:34:36.999205Z","caller":"traceutil/trace.go:171","msg":"trace[1483278895] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:6; response_revision:469; }","duration":"162.834825ms","start":"2023-09-25T10:34:36.836356Z","end":"2023-09-25T10:34:36.99919Z","steps":["trace[1483278895] 'agreement among raft nodes before linearized reading'  (duration: 162.701625ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-25T10:34:36.999Z","caller":"traceutil/trace.go:171","msg":"trace[3634572] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:0; response_revision:469; }","duration":"166.183579ms","start":"2023-09-25T10:34:36.832812Z","end":"2023-09-25T10:34:36.998995Z","steps":["trace[3634572] 'agreement among raft nodes before linearized reading'  (duration: 166.053912ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-25T10:34:36.998947Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"163.574471ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:6 size:31393"}
	{"level":"info","ts":"2023-09-25T10:34:36.999285Z","caller":"traceutil/trace.go:171","msg":"trace[819315326] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:6; response_revision:469; }","duration":"163.922954ms","start":"2023-09-25T10:34:36.83536Z","end":"2023-09-25T10:34:36.999283Z","steps":["trace[819315326] 'agreement among raft nodes before linearized reading'  (duration: 163.541307ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-25T10:44:09.779305Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":608}
	{"level":"info","ts":"2023-09-25T10:44:09.779775Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":608,"took":"346.08µs","hash":977468107}
	{"level":"info","ts":"2023-09-25T10:44:09.779794Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":977468107,"revision":608,"compact-revision":-1}
	{"level":"info","ts":"2023-09-25T10:49:09.783821Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":698}
	{"level":"info","ts":"2023-09-25T10:49:09.784257Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":698,"took":"244.664µs","hash":3592134345}
	{"level":"info","ts":"2023-09-25T10:49:09.784273Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3592134345,"revision":698,"compact-revision":608}
	
	* 
	* ==> gcp-auth [f0ceeef2fd99] <==
	* 2023/09/25 10:34:44 GCP Auth Webhook started!
	2023/09/25 10:52:28 Ready to marshal response ...
	2023/09/25 10:52:28 Ready to write response ...
	2023/09/25 10:52:28 Ready to marshal response ...
	2023/09/25 10:52:28 Ready to write response ...
	2023/09/25 10:52:28 Ready to marshal response ...
	2023/09/25 10:52:28 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  10:54:04 up 20 min,  0 users,  load average: 0.32, 0.23, 0.15
	Linux addons-183000 5.10.57 #1 SMP PREEMPT Mon Sep 18 20:10:16 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [e38f0c6d58f7] <==
	* I0925 10:34:10.471973       1 autoregister_controller.go:141] Starting autoregister controller
	I0925 10:34:10.471976       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0925 10:34:10.471978       1 cache.go:39] Caches are synced for autoregister controller
	I0925 10:34:11.347033       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0925 10:34:11.348323       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0925 10:34:11.348329       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0925 10:34:11.481757       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0925 10:34:11.494186       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0925 10:34:11.536039       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0925 10:34:11.538110       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.105.2]
	I0925 10:34:11.538484       1 controller.go:624] quota admission added evaluator for: endpoints
	I0925 10:34:11.539858       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0925 10:34:12.380709       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0925 10:34:12.885080       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0925 10:34:12.893075       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0925 10:34:12.904077       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0925 10:34:26.498134       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0925 10:34:26.509156       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0925 10:34:27.526494       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0925 10:34:34.002889       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.108.108.100"}
	I0925 10:34:34.022823       1 controller.go:624] quota admission added evaluator for: jobs.batch
	I0925 10:39:10.399639       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0925 10:44:10.399758       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0925 10:49:10.400399       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0925 10:52:28.085866       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.104.63.194"}
	
	* 
	* ==> kube-controller-manager [e24563a55274] <==
	* I0925 10:34:43.451889       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0925 10:34:43.452265       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0925 10:34:45.326036       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-d4c87556c" duration="3.706108ms"
	I0925 10:34:45.326133       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-d4c87556c" duration="15.863µs"
	I0925 10:34:56.694544       1 resource_quota_monitor.go:210] "QuotaMonitor created object count evaluator" resource="traces.gadget.kinvolk.io"
	I0925 10:34:56.694776       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0925 10:34:56.795338       1 shared_informer.go:318] Caches are synced for resource quota
	I0925 10:34:57.011337       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0925 10:34:57.011353       1 shared_informer.go:318] Caches are synced for garbage collector
	I0925 10:35:13.027887       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0925 10:35:13.028052       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0925 10:35:13.045569       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0925 10:35:13.045933       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0925 10:52:28.099991       1 event.go:307] "Event occurred" object="headlamp/headlamp" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set headlamp-58b88cff49 to 1"
	I0925 10:52:28.105802       1 event.go:307] "Event occurred" object="headlamp/headlamp-58b88cff49" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"headlamp-58b88cff49-\" is forbidden: error looking up service account headlamp/headlamp: serviceaccount \"headlamp\" not found"
	I0925 10:52:28.109214       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-58b88cff49" duration="9.4942ms"
	E0925 10:52:28.109254       1 replica_set.go:557] sync "headlamp/headlamp-58b88cff49" failed with pods "headlamp-58b88cff49-" is forbidden: error looking up service account headlamp/headlamp: serviceaccount "headlamp" not found
	I0925 10:52:28.128268       1 event.go:307] "Event occurred" object="headlamp/headlamp-58b88cff49" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: headlamp-58b88cff49-kdgv2"
	I0925 10:52:28.135552       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-58b88cff49" duration="26.278716ms"
	I0925 10:52:28.150301       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-58b88cff49" duration="14.504339ms"
	I0925 10:52:28.150527       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-58b88cff49" duration="57.791µs"
	I0925 10:52:28.150940       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-58b88cff49" duration="13.416µs"
	I0925 10:52:34.675553       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-58b88cff49" duration="34.041µs"
	I0925 10:52:34.688168       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-58b88cff49" duration="3.451285ms"
	I0925 10:52:34.688290       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-58b88cff49" duration="34.875µs"
	
	* 
	* ==> kube-proxy [fff72387d957] <==
	* I0925 10:34:27.163880       1 server_others.go:69] "Using iptables proxy"
	I0925 10:34:27.181208       1 node.go:141] Successfully retrieved node IP: 192.168.105.2
	I0925 10:34:27.228178       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0925 10:34:27.228201       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0925 10:34:27.231917       1 server_others.go:152] "Using iptables Proxier"
	I0925 10:34:27.231983       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0925 10:34:27.232100       1 server.go:846] "Version info" version="v1.28.2"
	I0925 10:34:27.232211       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0925 10:34:27.232663       1 config.go:188] "Starting service config controller"
	I0925 10:34:27.232700       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0925 10:34:27.232734       1 config.go:97] "Starting endpoint slice config controller"
	I0925 10:34:27.232760       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0925 10:34:27.233047       1 config.go:315] "Starting node config controller"
	I0925 10:34:27.233085       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0925 10:34:27.333424       1 shared_informer.go:318] Caches are synced for node config
	I0925 10:34:27.333462       1 shared_informer.go:318] Caches are synced for service config
	I0925 10:34:27.333490       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [5a87dfcd0e1a] <==
	* W0925 10:34:10.412769       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0925 10:34:10.413000       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0925 10:34:10.412552       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0925 10:34:10.413020       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0925 10:34:10.412572       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0925 10:34:10.413082       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0925 10:34:10.412878       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0925 10:34:10.413107       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0925 10:34:11.233945       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0925 10:34:11.233969       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0925 10:34:11.245555       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0925 10:34:11.245565       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0925 10:34:11.257234       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0925 10:34:11.257245       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0925 10:34:11.305366       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0925 10:34:11.305376       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0925 10:34:11.335532       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0925 10:34:11.335546       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0925 10:34:11.379250       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0925 10:34:11.379349       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0925 10:34:11.401540       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0925 10:34:11.401585       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0925 10:34:11.494359       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0925 10:34:11.494379       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0925 10:34:13.407721       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-09-25 10:33:55 UTC, ends at Mon 2023-09-25 10:54:04 UTC. --
	Sep 25 10:49:12 addons-183000 kubelet[2366]: W0925 10:49:12.970035    2366 machine.go:65] Cannot read vendor id correctly, set empty.
	Sep 25 10:50:12 addons-183000 kubelet[2366]: E0925 10:50:12.960817    2366 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 25 10:50:12 addons-183000 kubelet[2366]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 25 10:50:12 addons-183000 kubelet[2366]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 25 10:50:12 addons-183000 kubelet[2366]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 25 10:51:12 addons-183000 kubelet[2366]: E0925 10:51:12.961216    2366 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 25 10:51:12 addons-183000 kubelet[2366]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 25 10:51:12 addons-183000 kubelet[2366]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 25 10:51:12 addons-183000 kubelet[2366]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 25 10:52:12 addons-183000 kubelet[2366]: E0925 10:52:12.961551    2366 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 25 10:52:12 addons-183000 kubelet[2366]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 25 10:52:12 addons-183000 kubelet[2366]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 25 10:52:12 addons-183000 kubelet[2366]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 25 10:52:28 addons-183000 kubelet[2366]: I0925 10:52:28.131682    2366 topology_manager.go:215] "Topology Admit Handler" podUID="f0f974cd-0799-485f-984a-d6be7c88ad59" podNamespace="headlamp" podName="headlamp-58b88cff49-kdgv2"
	Sep 25 10:52:28 addons-183000 kubelet[2366]: E0925 10:52:28.131717    2366 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="eaf92ddd-73b0-4011-a202-69967aa1b507" containerName="create"
	Sep 25 10:52:28 addons-183000 kubelet[2366]: E0925 10:52:28.131721    2366 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9e977507-3f09-4aa5-be19-ff0e470c3b62" containerName="patch"
	Sep 25 10:52:28 addons-183000 kubelet[2366]: I0925 10:52:28.131734    2366 memory_manager.go:346] "RemoveStaleState removing state" podUID="eaf92ddd-73b0-4011-a202-69967aa1b507" containerName="create"
	Sep 25 10:52:28 addons-183000 kubelet[2366]: I0925 10:52:28.131737    2366 memory_manager.go:346] "RemoveStaleState removing state" podUID="9e977507-3f09-4aa5-be19-ff0e470c3b62" containerName="patch"
	Sep 25 10:52:28 addons-183000 kubelet[2366]: I0925 10:52:28.142096    2366 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/f0f974cd-0799-485f-984a-d6be7c88ad59-gcp-creds\") pod \"headlamp-58b88cff49-kdgv2\" (UID: \"f0f974cd-0799-485f-984a-d6be7c88ad59\") " pod="headlamp/headlamp-58b88cff49-kdgv2"
	Sep 25 10:52:28 addons-183000 kubelet[2366]: I0925 10:52:28.142134    2366 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vjkl\" (UniqueName: \"kubernetes.io/projected/f0f974cd-0799-485f-984a-d6be7c88ad59-kube-api-access-6vjkl\") pod \"headlamp-58b88cff49-kdgv2\" (UID: \"f0f974cd-0799-485f-984a-d6be7c88ad59\") " pod="headlamp/headlamp-58b88cff49-kdgv2"
	Sep 25 10:52:34 addons-183000 kubelet[2366]: I0925 10:52:34.684006    2366 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="headlamp/headlamp-58b88cff49-kdgv2" podStartSLOduration=1.357655498 podCreationTimestamp="2023-09-25 10:52:28 +0000 UTC" firstStartedPulling="2023-09-25 10:52:28.588947863 +0000 UTC m=+1095.714927064" lastFinishedPulling="2023-09-25 10:52:33.915275559 +0000 UTC m=+1101.041254718" observedRunningTime="2023-09-25 10:52:34.675483103 +0000 UTC m=+1101.801462304" watchObservedRunningTime="2023-09-25 10:52:34.683983152 +0000 UTC m=+1101.809962311"
	Sep 25 10:53:12 addons-183000 kubelet[2366]: E0925 10:53:12.961446    2366 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 25 10:53:12 addons-183000 kubelet[2366]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 25 10:53:12 addons-183000 kubelet[2366]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 25 10:53:12 addons-183000 kubelet[2366]:  > table="nat" chain="KUBE-KUBELET-CANARY"
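The recurring iptables canary failure above is the kubelet probing for an IPv6 NAT table that this kernel lacks (or has not loaded): the periodic canary check fails on ip6tables while IPv4 proxying is unaffected. A quick way to confirm from the host (sketch, assuming the profile is still running):

	minikube ssh -p addons-183000 -- "sudo ip6tables -t nat -L -n"
	# Expect the same "Table does not exist (do you need to insmod?)" error;
	# "lsmod | grep ip6table_nat" inside the VM should come back empty.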
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-183000 -n addons-183000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-183000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/CloudSpanner FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/CloudSpanner (817.77s)

                                                
                                    
TestCertOptions (9.9s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-830000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-830000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.636663417s)

                                                
                                                
-- stdout --
	* [cert-options-830000] minikube v1.31.2 on Darwin 13.6 (arm64)
	  - MINIKUBE_LOCATION=17297
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node cert-options-830000 in cluster cert-options-830000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-830000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-830000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-830000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-830000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-830000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 89 (76.095333ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-830000"

                                                
                                                
-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-830000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 89
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
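The four SAN assertions above never had a certificate to inspect, since the VM failed to provision, so they report every name as absent. On a healthy cluster, the check at cert_options_test.go:60 amounts to reading the certificate's SAN extension (a sketch of the same openssl invocation from the transcript, with a grep added for illustration):

	out/minikube-darwin-arm64 -p cert-options-830000 ssh \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
	  | grep -A1 "Subject Alternative Name"
	# Expected to list 127.0.0.1, 192.168.15.15, localhost and www.google.com,
	# per the --apiserver-ips/--apiserver-names flags passed to minikube start.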
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-830000 config view
cert_options_test.go:93: Kubeconfig apiserver port incorrect. Output of 'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-830000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-830000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 89 (36.885584ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-830000"

                                                
                                                
-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-830000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 89
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right API port. 
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-830000"

                                                
                                                
-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2023-09-25 04:21:34.928283 -0700 PDT m=+2894.804036543
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-830000 -n cert-options-830000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-830000 -n cert-options-830000: exit status 7 (27.935666ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-830000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-830000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-830000
--- FAIL: TestCertOptions (9.90s)
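Every provisioning failure in this test and the cert/docker-flag tests that follow bottoms out in the same host-side error: the qemu2 driver cannot reach the socket_vmnet daemon at /var/run/socket_vmnet. A minimal host-side triage sketch (service label and install path vary by how socket_vmnet was installed, so these steps are assumptions, not the test harness's method):

	ls -l /var/run/socket_vmnet        # the UNIX socket should exist
	pgrep -fl socket_vmnet             # the daemon should be running
	# If either check fails, restarting the socket_vmnet service (e.g. its
	# launchd job, where one is installed) before re-running the suite is the
	# usual fix for "Connection refused" here.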

                                                
                                    
TestCertExpiration (195.18s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-627000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-627000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.783309542s)

                                                
                                                
-- stdout --
	* [cert-expiration-627000] minikube v1.31.2 on Darwin 13.6 (arm64)
	  - MINIKUBE_LOCATION=17297
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node cert-expiration-627000 in cluster cert-expiration-627000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-627000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-627000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-627000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
E0925 04:21:33.199562    1469 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/ingress-addon-legacy-907000/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-627000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-627000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.229134667s)

                                                
                                                
-- stdout --
	* [cert-expiration-627000] minikube v1.31.2 on Darwin 13.6 (arm64)
	  - MINIKUBE_LOCATION=17297
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node cert-expiration-627000 in cluster cert-expiration-627000
	* Restarting existing qemu2 VM for "cert-expiration-627000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-627000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-627000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-627000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-627000] minikube v1.31.2 on Darwin 13.6 (arm64)
	  - MINIKUBE_LOCATION=17297
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node cert-expiration-627000 in cluster cert-expiration-627000
	* Restarting existing qemu2 VM for "cert-expiration-627000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-627000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-627000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2023-09-25 04:24:35.047991 -0700 PDT m=+3074.923601668
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-627000 -n cert-expiration-627000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-627000 -n cert-expiration-627000: exit status 7 (65.621708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-627000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-627000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-627000
--- FAIL: TestCertExpiration (195.18s)

                                                
                                    
TestDockerFlags (10.37s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-933000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-933000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.127025s)

-- stdout --
	* [docker-flags-933000] minikube v1.31.2 on Darwin 13.6 (arm64)
	  - MINIKUBE_LOCATION=17297
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node docker-flags-933000 in cluster docker-flags-933000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-933000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0925 04:21:14.804119    4883 out.go:296] Setting OutFile to fd 1 ...
	I0925 04:21:14.804234    4883 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:21:14.804237    4883 out.go:309] Setting ErrFile to fd 2...
	I0925 04:21:14.804239    4883 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:21:14.804367    4883 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17297-1010/.minikube/bin
	I0925 04:21:14.805410    4883 out.go:303] Setting JSON to false
	I0925 04:21:14.820887    4883 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3049,"bootTime":1695637825,"procs":411,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.6","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 04:21:14.820978    4883 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0925 04:21:14.826260    4883 out.go:177] * [docker-flags-933000] minikube v1.31.2 on Darwin 13.6 (arm64)
	I0925 04:21:14.834231    4883 out.go:177]   - MINIKUBE_LOCATION=17297
	I0925 04:21:14.837178    4883 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	I0925 04:21:14.834289    4883 notify.go:220] Checking for updates...
	I0925 04:21:14.843183    4883 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 04:21:14.844581    4883 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 04:21:14.848179    4883 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	I0925 04:21:14.851217    4883 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 04:21:14.854561    4883 config.go:182] Loaded profile config "force-systemd-flag-662000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 04:21:14.854626    4883 config.go:182] Loaded profile config "multinode-352000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 04:21:14.854661    4883 driver.go:373] Setting default libvirt URI to qemu:///system
	I0925 04:21:14.859154    4883 out.go:177] * Using the qemu2 driver based on user configuration
	I0925 04:21:14.866192    4883 start.go:298] selected driver: qemu2
	I0925 04:21:14.866200    4883 start.go:902] validating driver "qemu2" against <nil>
	I0925 04:21:14.866208    4883 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 04:21:14.868241    4883 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0925 04:21:14.871241    4883 out.go:177] * Automatically selected the socket_vmnet network
	I0925 04:21:14.874222    4883 start_flags.go:917] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0925 04:21:14.874241    4883 cni.go:84] Creating CNI manager for ""
	I0925 04:21:14.874249    4883 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 04:21:14.874253    4883 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0925 04:21:14.874259    4883 start_flags.go:321] config:
	{Name:docker-flags-933000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:docker-flags-933000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 04:21:14.878751    4883 iso.go:125] acquiring lock: {Name:mkf881a60cf9fd1672567914305ff6f7a4f13809 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 04:21:14.886190    4883 out.go:177] * Starting control plane node docker-flags-933000 in cluster docker-flags-933000
	I0925 04:21:14.890193    4883 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0925 04:21:14.890212    4883 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0925 04:21:14.890227    4883 cache.go:57] Caching tarball of preloaded images
	I0925 04:21:14.890285    4883 preload.go:174] Found /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0925 04:21:14.890291    4883 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0925 04:21:14.890361    4883 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/docker-flags-933000/config.json ...
	I0925 04:21:14.890380    4883 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/docker-flags-933000/config.json: {Name:mkbed7bbdddd4aa24d85aa0d0cf05273b6909852 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 04:21:14.890580    4883 start.go:365] acquiring machines lock for docker-flags-933000: {Name:mk918d99818e2cf21e5912bc291ed18d4b442ba6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 04:21:14.890610    4883 start.go:369] acquired machines lock for "docker-flags-933000" in 23.667µs
	I0925 04:21:14.890626    4883 start.go:93] Provisioning new machine with config: &{Name:docker-flags-933000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:docker-flags-933000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 04:21:14.890657    4883 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 04:21:14.895166    4883 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0925 04:21:14.910113    4883 start.go:159] libmachine.API.Create for "docker-flags-933000" (driver="qemu2")
	I0925 04:21:14.910145    4883 client.go:168] LocalClient.Create starting
	I0925 04:21:14.910194    4883 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem
	I0925 04:21:14.910220    4883 main.go:141] libmachine: Decoding PEM data...
	I0925 04:21:14.910230    4883 main.go:141] libmachine: Parsing certificate...
	I0925 04:21:14.910267    4883 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/cert.pem
	I0925 04:21:14.910285    4883 main.go:141] libmachine: Decoding PEM data...
	I0925 04:21:14.910291    4883 main.go:141] libmachine: Parsing certificate...
	I0925 04:21:14.910597    4883 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17297-1010/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0925 04:21:15.032322    4883 main.go:141] libmachine: Creating SSH key...
	I0925 04:21:15.435368    4883 main.go:141] libmachine: Creating Disk image...
	I0925 04:21:15.435378    4883 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 04:21:15.435538    4883 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/docker-flags-933000/disk.qcow2.raw /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/docker-flags-933000/disk.qcow2
	I0925 04:21:15.444254    4883 main.go:141] libmachine: STDOUT: 
	I0925 04:21:15.444279    4883 main.go:141] libmachine: STDERR: 
	I0925 04:21:15.444336    4883 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/docker-flags-933000/disk.qcow2 +20000M
	I0925 04:21:15.451601    4883 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 04:21:15.451615    4883 main.go:141] libmachine: STDERR: 
	I0925 04:21:15.451638    4883 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/docker-flags-933000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/docker-flags-933000/disk.qcow2
	I0925 04:21:15.451644    4883 main.go:141] libmachine: Starting QEMU VM...
	I0925 04:21:15.451679    4883 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/docker-flags-933000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/docker-flags-933000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/docker-flags-933000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:94:54:7b:f2:f5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/docker-flags-933000/disk.qcow2
	I0925 04:21:15.453174    4883 main.go:141] libmachine: STDOUT: 
	I0925 04:21:15.453187    4883 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 04:21:15.453206    4883 client.go:171] LocalClient.Create took 543.056917ms
	I0925 04:21:17.455416    4883 start.go:128] duration metric: createHost completed in 2.564730916s
	I0925 04:21:17.455492    4883 start.go:83] releasing machines lock for "docker-flags-933000", held for 2.564870667s
	W0925 04:21:17.455576    4883 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:21:17.474773    4883 out.go:177] * Deleting "docker-flags-933000" in qemu2 ...
	W0925 04:21:17.491185    4883 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:21:17.491219    4883 start.go:703] Will try again in 5 seconds ...
	I0925 04:21:22.493437    4883 start.go:365] acquiring machines lock for docker-flags-933000: {Name:mk918d99818e2cf21e5912bc291ed18d4b442ba6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 04:21:22.493830    4883 start.go:369] acquired machines lock for "docker-flags-933000" in 314.291µs
	I0925 04:21:22.493951    4883 start.go:93] Provisioning new machine with config: &{Name:docker-flags-933000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:docker-flags-933000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 04:21:22.494225    4883 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 04:21:22.503711    4883 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0925 04:21:22.550628    4883 start.go:159] libmachine.API.Create for "docker-flags-933000" (driver="qemu2")
	I0925 04:21:22.550663    4883 client.go:168] LocalClient.Create starting
	I0925 04:21:22.550779    4883 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem
	I0925 04:21:22.550836    4883 main.go:141] libmachine: Decoding PEM data...
	I0925 04:21:22.550858    4883 main.go:141] libmachine: Parsing certificate...
	I0925 04:21:22.550924    4883 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/cert.pem
	I0925 04:21:22.550958    4883 main.go:141] libmachine: Decoding PEM data...
	I0925 04:21:22.550972    4883 main.go:141] libmachine: Parsing certificate...
	I0925 04:21:22.551433    4883 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17297-1010/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0925 04:21:22.728977    4883 main.go:141] libmachine: Creating SSH key...
	I0925 04:21:22.848350    4883 main.go:141] libmachine: Creating Disk image...
	I0925 04:21:22.848356    4883 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 04:21:22.848492    4883 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/docker-flags-933000/disk.qcow2.raw /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/docker-flags-933000/disk.qcow2
	I0925 04:21:22.857231    4883 main.go:141] libmachine: STDOUT: 
	I0925 04:21:22.857252    4883 main.go:141] libmachine: STDERR: 
	I0925 04:21:22.857316    4883 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/docker-flags-933000/disk.qcow2 +20000M
	I0925 04:21:22.864470    4883 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 04:21:22.864481    4883 main.go:141] libmachine: STDERR: 
	I0925 04:21:22.864500    4883 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/docker-flags-933000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/docker-flags-933000/disk.qcow2
	I0925 04:21:22.864511    4883 main.go:141] libmachine: Starting QEMU VM...
	I0925 04:21:22.864547    4883 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/docker-flags-933000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/docker-flags-933000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/docker-flags-933000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:ea:f8:4a:e9:40 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/docker-flags-933000/disk.qcow2
	I0925 04:21:22.866029    4883 main.go:141] libmachine: STDOUT: 
	I0925 04:21:22.866043    4883 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 04:21:22.866065    4883 client.go:171] LocalClient.Create took 315.397667ms
	I0925 04:21:24.868256    4883 start.go:128] duration metric: createHost completed in 2.373969541s
	I0925 04:21:24.868331    4883 start.go:83] releasing machines lock for "docker-flags-933000", held for 2.374473s
	W0925 04:21:24.868768    4883 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-933000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-933000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:21:24.875196    4883 out.go:177] 
	W0925 04:21:24.880073    4883 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 04:21:24.880119    4883 out.go:239] * 
	* 
	W0925 04:21:24.882976    4883 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0925 04:21:24.891169    4883 out.go:177] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-933000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-933000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-933000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 89 (73.675709ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p docker-flags-933000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-933000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 89
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-933000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-933000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-933000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-933000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 89 (41.679667ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p docker-flags-933000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-933000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 89
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-933000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-933000\"\n"
panic.go:523: *** TestDockerFlags FAILED at 2023-09-25 04:21:25.022758 -0700 PDT m=+2884.898519126
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-933000 -n docker-flags-933000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-933000 -n docker-flags-933000: exit status 7 (26.937625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-933000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-933000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-933000
--- FAIL: TestDockerFlags (10.37s)
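For reference, the two assertions above verify that --docker-env and --docker-opt values reach dockerd's systemd unit; they never ran against a live node here because the VM failed to start. On a healthy cluster the same probes would look roughly like this (illustrative output, not captured from this run; exact systemctl formatting may differ):

	out/minikube-darwin-arm64 -p docker-flags-933000 ssh "sudo systemctl show docker --property=Environment --no-pager"
	# Environment=FOO=BAR BAZ=BAT ...
	out/minikube-darwin-arm64 -p docker-flags-933000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
	# ExecStart=... dockerd ... --debug --icc=true ...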

TestForceSystemdFlag (11.73s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-662000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-662000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (11.531043625s)

-- stdout --
	* [force-systemd-flag-662000] minikube v1.31.2 on Darwin 13.6 (arm64)
	  - MINIKUBE_LOCATION=17297
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node force-systemd-flag-662000 in cluster force-systemd-flag-662000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-662000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0925 04:21:08.322547    4861 out.go:296] Setting OutFile to fd 1 ...
	I0925 04:21:08.322678    4861 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:21:08.322682    4861 out.go:309] Setting ErrFile to fd 2...
	I0925 04:21:08.322684    4861 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:21:08.322807    4861 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17297-1010/.minikube/bin
	I0925 04:21:08.323884    4861 out.go:303] Setting JSON to false
	I0925 04:21:08.339140    4861 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3043,"bootTime":1695637825,"procs":411,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.6","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 04:21:08.339207    4861 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0925 04:21:08.344817    4861 out.go:177] * [force-systemd-flag-662000] minikube v1.31.2 on Darwin 13.6 (arm64)
	I0925 04:21:08.356796    4861 out.go:177]   - MINIKUBE_LOCATION=17297
	I0925 04:21:08.351705    4861 notify.go:220] Checking for updates...
	I0925 04:21:08.364792    4861 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	I0925 04:21:08.372739    4861 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 04:21:08.375802    4861 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 04:21:08.378832    4861 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	I0925 04:21:08.385808    4861 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 04:21:08.390250    4861 config.go:182] Loaded profile config "force-systemd-env-179000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 04:21:08.390341    4861 config.go:182] Loaded profile config "multinode-352000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 04:21:08.390378    4861 driver.go:373] Setting default libvirt URI to qemu:///system
	I0925 04:21:08.393754    4861 out.go:177] * Using the qemu2 driver based on user configuration
	I0925 04:21:08.400811    4861 start.go:298] selected driver: qemu2
	I0925 04:21:08.400817    4861 start.go:902] validating driver "qemu2" against <nil>
	I0925 04:21:08.400823    4861 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 04:21:08.403058    4861 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0925 04:21:08.404538    4861 out.go:177] * Automatically selected the socket_vmnet network
	I0925 04:21:08.407830    4861 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0925 04:21:08.407849    4861 cni.go:84] Creating CNI manager for ""
	I0925 04:21:08.407856    4861 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 04:21:08.407863    4861 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0925 04:21:08.407868    4861 start_flags.go:321] config:
	{Name:force-systemd-flag-662000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:force-systemd-flag-662000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 04:21:08.412360    4861 iso.go:125] acquiring lock: {Name:mkf881a60cf9fd1672567914305ff6f7a4f13809 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 04:21:08.415883    4861 out.go:177] * Starting control plane node force-systemd-flag-662000 in cluster force-systemd-flag-662000
	I0925 04:21:08.423847    4861 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0925 04:21:08.423868    4861 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0925 04:21:08.423881    4861 cache.go:57] Caching tarball of preloaded images
	I0925 04:21:08.423962    4861 preload.go:174] Found /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0925 04:21:08.423968    4861 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0925 04:21:08.424034    4861 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/force-systemd-flag-662000/config.json ...
	I0925 04:21:08.424047    4861 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/force-systemd-flag-662000/config.json: {Name:mkbaa3d5be38df16340330a83b14e8163315e8cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 04:21:08.424250    4861 start.go:365] acquiring machines lock for force-systemd-flag-662000: {Name:mk918d99818e2cf21e5912bc291ed18d4b442ba6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 04:21:08.424283    4861 start.go:369] acquired machines lock for "force-systemd-flag-662000" in 25.875µs
	I0925 04:21:08.424294    4861 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-662000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:force-systemd-flag-662000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 04:21:08.424325    4861 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 04:21:08.427766    4861 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0925 04:21:08.444897    4861 start.go:159] libmachine.API.Create for "force-systemd-flag-662000" (driver="qemu2")
	I0925 04:21:08.444922    4861 client.go:168] LocalClient.Create starting
	I0925 04:21:08.444989    4861 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem
	I0925 04:21:08.445018    4861 main.go:141] libmachine: Decoding PEM data...
	I0925 04:21:08.445027    4861 main.go:141] libmachine: Parsing certificate...
	I0925 04:21:08.445072    4861 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/cert.pem
	I0925 04:21:08.445093    4861 main.go:141] libmachine: Decoding PEM data...
	I0925 04:21:08.445101    4861 main.go:141] libmachine: Parsing certificate...
	I0925 04:21:08.445408    4861 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17297-1010/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0925 04:21:08.560171    4861 main.go:141] libmachine: Creating SSH key...
	I0925 04:21:08.702325    4861 main.go:141] libmachine: Creating Disk image...
	I0925 04:21:08.702333    4861 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 04:21:08.702477    4861 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/force-systemd-flag-662000/disk.qcow2.raw /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/force-systemd-flag-662000/disk.qcow2
	I0925 04:21:08.711063    4861 main.go:141] libmachine: STDOUT: 
	I0925 04:21:08.711083    4861 main.go:141] libmachine: STDERR: 
	I0925 04:21:08.711134    4861 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/force-systemd-flag-662000/disk.qcow2 +20000M
	I0925 04:21:08.718425    4861 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 04:21:08.718449    4861 main.go:141] libmachine: STDERR: 
	I0925 04:21:08.718474    4861 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/force-systemd-flag-662000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/force-systemd-flag-662000/disk.qcow2
	I0925 04:21:08.718482    4861 main.go:141] libmachine: Starting QEMU VM...
	I0925 04:21:08.718520    4861 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/force-systemd-flag-662000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/force-systemd-flag-662000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/force-systemd-flag-662000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:a5:96:ac:6b:25 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/force-systemd-flag-662000/disk.qcow2
	I0925 04:21:08.720117    4861 main.go:141] libmachine: STDOUT: 
	I0925 04:21:08.720129    4861 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 04:21:08.720148    4861 client.go:171] LocalClient.Create took 275.219542ms
	I0925 04:21:10.722373    4861 start.go:128] duration metric: createHost completed in 2.298027667s
	I0925 04:21:10.722423    4861 start.go:83] releasing machines lock for "force-systemd-flag-662000", held for 2.298127958s
	W0925 04:21:10.722468    4861 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:21:10.729735    4861 out.go:177] * Deleting "force-systemd-flag-662000" in qemu2 ...
	W0925 04:21:10.749483    4861 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:21:10.749510    4861 start.go:703] Will try again in 5 seconds ...
	I0925 04:21:15.751775    4861 start.go:365] acquiring machines lock for force-systemd-flag-662000: {Name:mk918d99818e2cf21e5912bc291ed18d4b442ba6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 04:21:17.455669    4861 start.go:369] acquired machines lock for "force-systemd-flag-662000" in 1.703766292s
	I0925 04:21:17.455776    4861 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-662000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:force-systemd-flag-662000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 04:21:17.456035    4861 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 04:21:17.465711    4861 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0925 04:21:17.514203    4861 start.go:159] libmachine.API.Create for "force-systemd-flag-662000" (driver="qemu2")
	I0925 04:21:17.514255    4861 client.go:168] LocalClient.Create starting
	I0925 04:21:17.514381    4861 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem
	I0925 04:21:17.514447    4861 main.go:141] libmachine: Decoding PEM data...
	I0925 04:21:17.514470    4861 main.go:141] libmachine: Parsing certificate...
	I0925 04:21:17.514534    4861 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/cert.pem
	I0925 04:21:17.514575    4861 main.go:141] libmachine: Decoding PEM data...
	I0925 04:21:17.514588    4861 main.go:141] libmachine: Parsing certificate...
	I0925 04:21:17.515193    4861 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17297-1010/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0925 04:21:17.646468    4861 main.go:141] libmachine: Creating SSH key...
	I0925 04:21:17.770878    4861 main.go:141] libmachine: Creating Disk image...
	I0925 04:21:17.770884    4861 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 04:21:17.771021    4861 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/force-systemd-flag-662000/disk.qcow2.raw /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/force-systemd-flag-662000/disk.qcow2
	I0925 04:21:17.779766    4861 main.go:141] libmachine: STDOUT: 
	I0925 04:21:17.779792    4861 main.go:141] libmachine: STDERR: 
	I0925 04:21:17.779858    4861 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/force-systemd-flag-662000/disk.qcow2 +20000M
	I0925 04:21:17.787017    4861 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 04:21:17.787040    4861 main.go:141] libmachine: STDERR: 
	I0925 04:21:17.787052    4861 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/force-systemd-flag-662000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/force-systemd-flag-662000/disk.qcow2
	I0925 04:21:17.787057    4861 main.go:141] libmachine: Starting QEMU VM...
	I0925 04:21:17.787106    4861 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/force-systemd-flag-662000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/force-systemd-flag-662000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/force-systemd-flag-662000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:9a:98:6c:f8:cb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/force-systemd-flag-662000/disk.qcow2
	I0925 04:21:17.788664    4861 main.go:141] libmachine: STDOUT: 
	I0925 04:21:17.788685    4861 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 04:21:17.788695    4861 client.go:171] LocalClient.Create took 274.433209ms
	I0925 04:21:19.790999    4861 start.go:128] duration metric: createHost completed in 2.334903541s
	I0925 04:21:19.791070    4861 start.go:83] releasing machines lock for "force-systemd-flag-662000", held for 2.335367333s
	W0925 04:21:19.791492    4861 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-662000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-662000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:21:19.801005    4861 out.go:177] 
	W0925 04:21:19.804017    4861 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 04:21:19.804049    4861 out.go:239] * 
	* 
	W0925 04:21:19.806935    4861 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0925 04:21:19.814964    4861 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-662000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-662000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-662000 ssh "docker info --format {{.CgroupDriver}}": exit status 89 (74.575833ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p force-systemd-flag-662000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-662000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 89
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2023-09-25 04:21:19.905865 -0700 PDT m=+2879.781630001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-662000 -n force-systemd-flag-662000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-662000 -n force-systemd-flag-662000: exit status 7 (32.561875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-662000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-662000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-662000
--- FAIL: TestForceSystemdFlag (11.73s)

TestForceSystemdEnv (9.9s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-179000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-179000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.69804175s)

-- stdout --
	* [force-systemd-env-179000] minikube v1.31.2 on Darwin 13.6 (arm64)
	  - MINIKUBE_LOCATION=17297
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node force-systemd-env-179000 in cluster force-systemd-env-179000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-179000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	

-- /stdout --
** stderr ** 
	I0925 04:21:04.905129    4841 out.go:296] Setting OutFile to fd 1 ...
	I0925 04:21:04.905251    4841 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:21:04.905254    4841 out.go:309] Setting ErrFile to fd 2...
	I0925 04:21:04.905256    4841 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:21:04.905391    4841 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17297-1010/.minikube/bin
	I0925 04:21:04.906476    4841 out.go:303] Setting JSON to false
	I0925 04:21:04.922871    4841 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3039,"bootTime":1695637825,"procs":412,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.6","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 04:21:04.922942    4841 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0925 04:21:04.927550    4841 out.go:177] * [force-systemd-env-179000] minikube v1.31.2 on Darwin 13.6 (arm64)
	I0925 04:21:04.935455    4841 out.go:177]   - MINIKUBE_LOCATION=17297
	I0925 04:21:04.938335    4841 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	I0925 04:21:04.935507    4841 notify.go:220] Checking for updates...
	I0925 04:21:04.944415    4841 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 04:21:04.945615    4841 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 04:21:04.948405    4841 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	I0925 04:21:04.951456    4841 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0925 04:21:04.954786    4841 config.go:182] Loaded profile config "multinode-352000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 04:21:04.954826    4841 driver.go:373] Setting default libvirt URI to qemu:///system
	I0925 04:21:04.959377    4841 out.go:177] * Using the qemu2 driver based on user configuration
	I0925 04:21:04.966457    4841 start.go:298] selected driver: qemu2
	I0925 04:21:04.966467    4841 start.go:902] validating driver "qemu2" against <nil>
	I0925 04:21:04.966477    4841 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 04:21:04.968427    4841 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0925 04:21:04.971410    4841 out.go:177] * Automatically selected the socket_vmnet network
	I0925 04:21:04.974483    4841 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0925 04:21:04.974499    4841 cni.go:84] Creating CNI manager for ""
	I0925 04:21:04.974505    4841 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 04:21:04.974508    4841 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0925 04:21:04.974513    4841 start_flags.go:321] config:
	{Name:force-systemd-env-179000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:force-systemd-env-179000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 04:21:04.978664    4841 iso.go:125] acquiring lock: {Name:mkf881a60cf9fd1672567914305ff6f7a4f13809 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 04:21:04.985370    4841 out.go:177] * Starting control plane node force-systemd-env-179000 in cluster force-systemd-env-179000
	I0925 04:21:04.988441    4841 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0925 04:21:04.988462    4841 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0925 04:21:04.988470    4841 cache.go:57] Caching tarball of preloaded images
	I0925 04:21:04.988530    4841 preload.go:174] Found /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0925 04:21:04.988535    4841 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0925 04:21:04.988598    4841 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/force-systemd-env-179000/config.json ...
	I0925 04:21:04.988611    4841 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/force-systemd-env-179000/config.json: {Name:mk4d00587cc624dd4543a0870a2910266abe1e00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 04:21:04.988790    4841 start.go:365] acquiring machines lock for force-systemd-env-179000: {Name:mk918d99818e2cf21e5912bc291ed18d4b442ba6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 04:21:04.988819    4841 start.go:369] acquired machines lock for "force-systemd-env-179000" in 20.709µs
	I0925 04:21:04.988828    4841 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-179000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:force-systemd-env-179000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 04:21:04.988856    4841 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 04:21:04.996465    4841 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0925 04:21:05.010670    4841 start.go:159] libmachine.API.Create for "force-systemd-env-179000" (driver="qemu2")
	I0925 04:21:05.010696    4841 client.go:168] LocalClient.Create starting
	I0925 04:21:05.010760    4841 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem
	I0925 04:21:05.010787    4841 main.go:141] libmachine: Decoding PEM data...
	I0925 04:21:05.010797    4841 main.go:141] libmachine: Parsing certificate...
	I0925 04:21:05.010836    4841 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/cert.pem
	I0925 04:21:05.010854    4841 main.go:141] libmachine: Decoding PEM data...
	I0925 04:21:05.010863    4841 main.go:141] libmachine: Parsing certificate...
	I0925 04:21:05.011214    4841 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17297-1010/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0925 04:21:05.135439    4841 main.go:141] libmachine: Creating SSH key...
	I0925 04:21:05.196187    4841 main.go:141] libmachine: Creating Disk image...
	I0925 04:21:05.196197    4841 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 04:21:05.196364    4841 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/force-systemd-env-179000/disk.qcow2.raw /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/force-systemd-env-179000/disk.qcow2
	I0925 04:21:05.205610    4841 main.go:141] libmachine: STDOUT: 
	I0925 04:21:05.205640    4841 main.go:141] libmachine: STDERR: 
	I0925 04:21:05.205719    4841 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/force-systemd-env-179000/disk.qcow2 +20000M
	I0925 04:21:05.214699    4841 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 04:21:05.214717    4841 main.go:141] libmachine: STDERR: 
	I0925 04:21:05.214758    4841 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/force-systemd-env-179000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/force-systemd-env-179000/disk.qcow2
	I0925 04:21:05.214770    4841 main.go:141] libmachine: Starting QEMU VM...
	I0925 04:21:05.214814    4841 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/force-systemd-env-179000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/force-systemd-env-179000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/force-systemd-env-179000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:4c:ac:72:2c:08 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/force-systemd-env-179000/disk.qcow2
	I0925 04:21:05.217189    4841 main.go:141] libmachine: STDOUT: 
	I0925 04:21:05.217214    4841 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 04:21:05.217254    4841 client.go:171] LocalClient.Create took 206.551333ms
	I0925 04:21:07.219493    4841 start.go:128] duration metric: createHost completed in 2.230608833s
	I0925 04:21:07.219560    4841 start.go:83] releasing machines lock for "force-systemd-env-179000", held for 2.230729333s
	W0925 04:21:07.219614    4841 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:21:07.226796    4841 out.go:177] * Deleting "force-systemd-env-179000" in qemu2 ...
	W0925 04:21:07.246480    4841 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:21:07.246522    4841 start.go:703] Will try again in 5 seconds ...
	I0925 04:21:12.248789    4841 start.go:365] acquiring machines lock for force-systemd-env-179000: {Name:mk918d99818e2cf21e5912bc291ed18d4b442ba6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 04:21:12.249303    4841 start.go:369] acquired machines lock for "force-systemd-env-179000" in 381.041µs
	I0925 04:21:12.249564    4841 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-179000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:force-systemd-env-179000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 04:21:12.249855    4841 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 04:21:12.258490    4841 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0925 04:21:12.304147    4841 start.go:159] libmachine.API.Create for "force-systemd-env-179000" (driver="qemu2")
	I0925 04:21:12.304191    4841 client.go:168] LocalClient.Create starting
	I0925 04:21:12.304303    4841 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem
	I0925 04:21:12.304375    4841 main.go:141] libmachine: Decoding PEM data...
	I0925 04:21:12.304394    4841 main.go:141] libmachine: Parsing certificate...
	I0925 04:21:12.304453    4841 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/cert.pem
	I0925 04:21:12.304488    4841 main.go:141] libmachine: Decoding PEM data...
	I0925 04:21:12.304502    4841 main.go:141] libmachine: Parsing certificate...
	I0925 04:21:12.304984    4841 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17297-1010/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0925 04:21:12.432794    4841 main.go:141] libmachine: Creating SSH key...
	I0925 04:21:12.517861    4841 main.go:141] libmachine: Creating Disk image...
	I0925 04:21:12.517868    4841 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 04:21:12.518003    4841 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/force-systemd-env-179000/disk.qcow2.raw /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/force-systemd-env-179000/disk.qcow2
	I0925 04:21:12.526517    4841 main.go:141] libmachine: STDOUT: 
	I0925 04:21:12.526530    4841 main.go:141] libmachine: STDERR: 
	I0925 04:21:12.526588    4841 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/force-systemd-env-179000/disk.qcow2 +20000M
	I0925 04:21:12.533768    4841 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 04:21:12.533792    4841 main.go:141] libmachine: STDERR: 
	I0925 04:21:12.533806    4841 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/force-systemd-env-179000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/force-systemd-env-179000/disk.qcow2
	I0925 04:21:12.533816    4841 main.go:141] libmachine: Starting QEMU VM...
	I0925 04:21:12.533857    4841 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/force-systemd-env-179000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/force-systemd-env-179000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/force-systemd-env-179000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:a4:35:d5:26:81 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/force-systemd-env-179000/disk.qcow2
	I0925 04:21:12.535457    4841 main.go:141] libmachine: STDOUT: 
	I0925 04:21:12.535470    4841 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 04:21:12.535483    4841 client.go:171] LocalClient.Create took 231.284833ms
	I0925 04:21:14.537651    4841 start.go:128] duration metric: createHost completed in 2.287748167s
	I0925 04:21:14.537715    4841 start.go:83] releasing machines lock for "force-systemd-env-179000", held for 2.288386083s
	W0925 04:21:14.538215    4841 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-179000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-179000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:21:14.545935    4841 out.go:177] 
	W0925 04:21:14.549979    4841 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 04:21:14.550009    4841 out.go:239] * 
	* 
	W0925 04:21:14.552958    4841 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0925 04:21:14.561783    4841 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-179000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
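The failure happens in the wrapper step visible in the stderr above: socket_vmnet_client first connects to /var/run/socket_vmnet and only then execs qemu-system-aarch64, handing it the connected socket as file descriptor 3 (hence -netdev socket,id=net0,fd=3), so "Connection refused" occurs before QEMU ever runs. A hypothetical minimal reproduction, assuming the client's usage is `socket_vmnet_client SOCKET COMMAND...` as in the logged invocation:

	# With a healthy daemon this prints the inherited socket on fd 3;
	# in this run it fails immediately with the same "Connection refused":
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /bin/sh -c 'ls -l /dev/fd/3'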
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-179000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-179000 ssh "docker info --format {{.CgroupDriver}}": exit status 89 (72.227083ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p force-systemd-env-179000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-179000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 89
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2023-09-25 04:21:14.653221 -0700 PDT m=+2874.528990043
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-179000 -n force-systemd-env-179000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-179000 -n force-systemd-env-179000: exit status 7 (31.530333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-179000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-179000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-179000
--- FAIL: TestForceSystemdEnv (9.90s)

TestFunctional/parallel/ServiceCmdConnect (34.85s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1626: (dbg) Run:  kubectl --context functional-742000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-742000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-zgsq6" [6ca1a74b-5f4a-43e9-8824-e36a6b269514] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-zgsq6" [6ca1a74b-5f4a-43e9-8824-e36a6b269514] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.008269125s
functional_test.go:1648: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.105.4:31381
functional_test.go:1660: error fetching http://192.168.105.4:31381: Get "http://192.168.105.4:31381": dial tcp 192.168.105.4:31381: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:31381: Get "http://192.168.105.4:31381": dial tcp 192.168.105.4:31381: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:31381: Get "http://192.168.105.4:31381": dial tcp 192.168.105.4:31381: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:31381: Get "http://192.168.105.4:31381": dial tcp 192.168.105.4:31381: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:31381: Get "http://192.168.105.4:31381": dial tcp 192.168.105.4:31381: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:31381: Get "http://192.168.105.4:31381": dial tcp 192.168.105.4:31381: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:31381: Get "http://192.168.105.4:31381": dial tcp 192.168.105.4:31381: connect: connection refused
functional_test.go:1680: failed to fetch http://192.168.105.4:31381: Get "http://192.168.105.4:31381": dial tcp 192.168.105.4:31381: connect: connection refused
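The seven identical fetch errors above are the test's retry loop; the manual equivalent is a plain curl against the NodePort URL reported at functional_test.go:1654:

	curl -fsS --max-time 5 http://192.168.105.4:31381
	# connection refused here matches the empty Endpoints in the service description below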
functional_test.go:1597: service test failed - dumping debug information
functional_test.go:1598: -----------------------service failure post-mortem--------------------------------
functional_test.go:1601: (dbg) Run:  kubectl --context functional-742000 describe po hello-node-connect
functional_test.go:1605: hello-node pod describe:
Name:             hello-node-connect-7799dfb7c6-zgsq6
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-742000/192.168.105.4
Start Time:       Mon, 25 Sep 2023 04:09:29 -0700
Labels:           app=hello-node-connect
                  pod-template-hash=7799dfb7c6
Annotations:      <none>
Status:           Running
IP:               10.244.0.7
IPs:
  IP:           10.244.0.7
Controlled By:  ReplicaSet/hello-node-connect-7799dfb7c6
Containers:
  echoserver-arm:
    Container ID:   docker://e7c4b36376bc5f722a31c8c7a99206c1e7c9b548b4f774fe4f7f3d8416b618e6
    Image:          registry.k8s.io/echoserver-arm:1.8
    Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
    Port:           <none>
    Host Port:      <none>
    State:          Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Mon, 25 Sep 2023 04:09:51 -0700
      Finished:     Mon, 25 Sep 2023 04:09:51 -0700
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Mon, 25 Sep 2023 04:09:34 -0700
      Finished:     Mon, 25 Sep 2023 04:09:34 -0700
    Ready:          False
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8xqhj (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  kube-api-access-8xqhj:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  33s                default-scheduler  Successfully assigned default/hello-node-connect-7799dfb7c6-zgsq6 to functional-742000
  Normal   Pulling    34s                kubelet            Pulling image "registry.k8s.io/echoserver-arm:1.8"
  Normal   Pulled     30s                kubelet            Successfully pulled image "registry.k8s.io/echoserver-arm:1.8" in 3.92s (3.92s including waiting)
  Normal   Created    12s (x3 over 30s)  kubelet            Created container echoserver-arm
  Normal   Started    12s (x3 over 30s)  kubelet            Started container echoserver-arm
  Normal   Pulled     12s (x2 over 29s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
  Warning  BackOff    12s (x3 over 28s)  kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-7799dfb7c6-zgsq6_default(6ca1a74b-5f4a-43e9-8824-e36a6b269514)

functional_test.go:1607: (dbg) Run:  kubectl --context functional-742000 logs -l app=hello-node-connect
functional_test.go:1611: hello-node logs:
exec /usr/sbin/nginx: exec format error
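"exec format error" means the image's /usr/sbin/nginx binary was built for a different CPU architecture than the arm64 node, so the container dies on exec and the pod never becomes Ready. A quick way to confirm the mismatch (sketch; requires pulling the image locally first):

	docker pull registry.k8s.io/echoserver-arm:1.8
	docker image inspect --format '{{.Os}}/{{.Architecture}}' registry.k8s.io/echoserver-arm:1.8
	# anything other than linux/arm64 cannot run on this node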
functional_test.go:1613: (dbg) Run:  kubectl --context functional-742000 describe svc hello-node-connect
functional_test.go:1617: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.111.12.229
IPs:                      10.111.12.229
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31381/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
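Note the empty Endpoints: field above: because the only backing pod is never Ready, the service has no endpoints to forward to, which is exactly why every fetch of the NodePort was refused. This can be watched directly:

	kubectl --context functional-742000 get endpoints hello-node-connect
	# the ENDPOINTS column stays empty while the pod is crash-looping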
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-742000 -n functional-742000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                                  Args                                                  |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| cache   | functional-742000 cache reload                                                                         | functional-742000 | jenkins | v1.31.2 | 25 Sep 23 04:08 PDT | 25 Sep 23 04:08 PDT |
	| ssh     | functional-742000 ssh                                                                                  | functional-742000 | jenkins | v1.31.2 | 25 Sep 23 04:08 PDT | 25 Sep 23 04:08 PDT |
	|         | sudo crictl inspecti                                                                                   |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                                                           |                   |         |         |                     |                     |
	| cache   | delete                                                                                                 | minikube          | jenkins | v1.31.2 | 25 Sep 23 04:08 PDT | 25 Sep 23 04:08 PDT |
	|         | registry.k8s.io/pause:3.1                                                                              |                   |         |         |                     |                     |
	| cache   | delete                                                                                                 | minikube          | jenkins | v1.31.2 | 25 Sep 23 04:08 PDT | 25 Sep 23 04:08 PDT |
	|         | registry.k8s.io/pause:latest                                                                           |                   |         |         |                     |                     |
	| kubectl | functional-742000 kubectl --                                                                           | functional-742000 | jenkins | v1.31.2 | 25 Sep 23 04:08 PDT | 25 Sep 23 04:08 PDT |
	|         | --context functional-742000                                                                            |                   |         |         |                     |                     |
	|         | get pods                                                                                               |                   |         |         |                     |                     |
	| start   | -p functional-742000                                                                                   | functional-742000 | jenkins | v1.31.2 | 25 Sep 23 04:08 PDT | 25 Sep 23 04:09 PDT |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision                               |                   |         |         |                     |                     |
	|         | --wait=all                                                                                             |                   |         |         |                     |                     |
	| service | invalid-svc -p                                                                                         | functional-742000 | jenkins | v1.31.2 | 25 Sep 23 04:09 PDT |                     |
	|         | functional-742000                                                                                      |                   |         |         |                     |                     |
	| cp      | functional-742000 cp                                                                                   | functional-742000 | jenkins | v1.31.2 | 25 Sep 23 04:09 PDT | 25 Sep 23 04:09 PDT |
	|         | testdata/cp-test.txt                                                                                   |                   |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                               |                   |         |         |                     |                     |
	| config  | functional-742000 config unset                                                                         | functional-742000 | jenkins | v1.31.2 | 25 Sep 23 04:09 PDT | 25 Sep 23 04:09 PDT |
	|         | cpus                                                                                                   |                   |         |         |                     |                     |
	| config  | functional-742000 config get                                                                           | functional-742000 | jenkins | v1.31.2 | 25 Sep 23 04:09 PDT |                     |
	|         | cpus                                                                                                   |                   |         |         |                     |                     |
	| config  | functional-742000 config set                                                                           | functional-742000 | jenkins | v1.31.2 | 25 Sep 23 04:09 PDT | 25 Sep 23 04:09 PDT |
	|         | cpus 2                                                                                                 |                   |         |         |                     |                     |
	| ssh     | functional-742000 ssh -n                                                                               | functional-742000 | jenkins | v1.31.2 | 25 Sep 23 04:09 PDT | 25 Sep 23 04:09 PDT |
	|         | functional-742000 sudo cat                                                                             |                   |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                               |                   |         |         |                     |                     |
	| config  | functional-742000 config get                                                                           | functional-742000 | jenkins | v1.31.2 | 25 Sep 23 04:09 PDT | 25 Sep 23 04:09 PDT |
	|         | cpus                                                                                                   |                   |         |         |                     |                     |
	| config  | functional-742000 config unset                                                                         | functional-742000 | jenkins | v1.31.2 | 25 Sep 23 04:09 PDT | 25 Sep 23 04:09 PDT |
	|         | cpus                                                                                                   |                   |         |         |                     |                     |
	| cp      | functional-742000 cp functional-742000:/home/docker/cp-test.txt                                        | functional-742000 | jenkins | v1.31.2 | 25 Sep 23 04:09 PDT | 25 Sep 23 04:09 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd2956233808/001/cp-test.txt |                   |         |         |                     |                     |
	| config  | functional-742000 config get                                                                           | functional-742000 | jenkins | v1.31.2 | 25 Sep 23 04:09 PDT |                     |
	|         | cpus                                                                                                   |                   |         |         |                     |                     |
	| ssh     | functional-742000 ssh echo                                                                             | functional-742000 | jenkins | v1.31.2 | 25 Sep 23 04:09 PDT | 25 Sep 23 04:09 PDT |
	|         | hello                                                                                                  |                   |         |         |                     |                     |
	| ssh     | functional-742000 ssh -n                                                                               | functional-742000 | jenkins | v1.31.2 | 25 Sep 23 04:09 PDT | 25 Sep 23 04:09 PDT |
	|         | functional-742000 sudo cat                                                                             |                   |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-742000 ssh cat                                                                              | functional-742000 | jenkins | v1.31.2 | 25 Sep 23 04:09 PDT | 25 Sep 23 04:09 PDT |
	|         | /etc/hostname                                                                                          |                   |         |         |                     |                     |
	| tunnel  | functional-742000 tunnel                                                                               | functional-742000 | jenkins | v1.31.2 | 25 Sep 23 04:09 PDT |                     |
	|         | --alsologtostderr                                                                                      |                   |         |         |                     |                     |
	| tunnel  | functional-742000 tunnel                                                                               | functional-742000 | jenkins | v1.31.2 | 25 Sep 23 04:09 PDT |                     |
	|         | --alsologtostderr                                                                                      |                   |         |         |                     |                     |
	| tunnel  | functional-742000 tunnel                                                                               | functional-742000 | jenkins | v1.31.2 | 25 Sep 23 04:09 PDT |                     |
	|         | --alsologtostderr                                                                                      |                   |         |         |                     |                     |
	| addons  | functional-742000 addons list                                                                          | functional-742000 | jenkins | v1.31.2 | 25 Sep 23 04:09 PDT | 25 Sep 23 04:09 PDT |
	| addons  | functional-742000 addons list                                                                          | functional-742000 | jenkins | v1.31.2 | 25 Sep 23 04:09 PDT | 25 Sep 23 04:09 PDT |
	|         | -o json                                                                                                |                   |         |         |                     |                     |
	| service | functional-742000 service                                                                              | functional-742000 | jenkins | v1.31.2 | 25 Sep 23 04:09 PDT | 25 Sep 23 04:09 PDT |
	|         | hello-node-connect --url                                                                               |                   |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/25 04:08:36
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.21.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0925 04:08:36.185530    3401 out.go:296] Setting OutFile to fd 1 ...
	I0925 04:08:36.185651    3401 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:08:36.185653    3401 out.go:309] Setting ErrFile to fd 2...
	I0925 04:08:36.185654    3401 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:08:36.185787    3401 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17297-1010/.minikube/bin
	I0925 04:08:36.186815    3401 out.go:303] Setting JSON to false
	I0925 04:08:36.202495    3401 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2291,"bootTime":1695637825,"procs":414,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.6","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 04:08:36.202589    3401 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0925 04:08:36.207136    3401 out.go:177] * [functional-742000] minikube v1.31.2 on Darwin 13.6 (arm64)
	I0925 04:08:36.213097    3401 out.go:177]   - MINIKUBE_LOCATION=17297
	I0925 04:08:36.217168    3401 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	I0925 04:08:36.213155    3401 notify.go:220] Checking for updates...
	I0925 04:08:36.220137    3401 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 04:08:36.223148    3401 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 04:08:36.226097    3401 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	I0925 04:08:36.228980    3401 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 04:08:36.232273    3401 config.go:182] Loaded profile config "functional-742000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 04:08:36.232314    3401 driver.go:373] Setting default libvirt URI to qemu:///system
	I0925 04:08:36.237062    3401 out.go:177] * Using the qemu2 driver based on existing profile
	I0925 04:08:36.244100    3401 start.go:298] selected driver: qemu2
	I0925 04:08:36.244105    3401 start.go:902] validating driver "qemu2" against &{Name:functional-742000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:functional-742000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 04:08:36.244165    3401 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 04:08:36.246091    3401 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0925 04:08:36.246111    3401 cni.go:84] Creating CNI manager for ""
	I0925 04:08:36.246118    3401 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 04:08:36.246122    3401 start_flags.go:321] config:
	{Name:functional-742000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:functional-742000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 04:08:36.249962    3401 iso.go:125] acquiring lock: {Name:mkf881a60cf9fd1672567914305ff6f7a4f13809 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 04:08:36.257100    3401 out.go:177] * Starting control plane node functional-742000 in cluster functional-742000
	I0925 04:08:36.261107    3401 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0925 04:08:36.261122    3401 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0925 04:08:36.261130    3401 cache.go:57] Caching tarball of preloaded images
	I0925 04:08:36.261189    3401 preload.go:174] Found /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0925 04:08:36.261192    3401 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0925 04:08:36.261265    3401 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/functional-742000/config.json ...
	I0925 04:08:36.261561    3401 start.go:365] acquiring machines lock for functional-742000: {Name:mk918d99818e2cf21e5912bc291ed18d4b442ba6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 04:08:36.261585    3401 start.go:369] acquired machines lock for "functional-742000" in 21.083µs
	I0925 04:08:36.261592    3401 start.go:96] Skipping create...Using existing machine configuration
	I0925 04:08:36.261594    3401 fix.go:54] fixHost starting: 
	I0925 04:08:36.262142    3401 fix.go:102] recreateIfNeeded on functional-742000: state=Running err=<nil>
	W0925 04:08:36.262149    3401 fix.go:128] unexpected machine state, will restart: <nil>
	I0925 04:08:36.266971    3401 out.go:177] * Updating the running qemu2 "functional-742000" VM ...
	I0925 04:08:36.275092    3401 machine.go:88] provisioning docker machine ...
	I0925 04:08:36.275102    3401 buildroot.go:166] provisioning hostname "functional-742000"
	I0925 04:08:36.275127    3401 main.go:141] libmachine: Using SSH client type: native
	I0925 04:08:36.275354    3401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105030760] 0x105032ed0 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0925 04:08:36.275358    3401 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-742000 && echo "functional-742000" | sudo tee /etc/hostname
	I0925 04:08:36.350183    3401 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-742000
	
	I0925 04:08:36.350239    3401 main.go:141] libmachine: Using SSH client type: native
	I0925 04:08:36.350497    3401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105030760] 0x105032ed0 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0925 04:08:36.350504    3401 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-742000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-742000/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-742000' | sudo tee -a /etc/hosts; 
				fi
			fi
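
The hostname and /etc/hosts commands above are plain shell run over SSH by libmachine. A minimal sketch of that pattern in Go with golang.org/x/crypto/ssh — the key path is a hypothetical placeholder, and the insecure host-key callback is only reasonable for a local test VM:

    // run_ssh.go: execute a provisioning command on the VM over SSH.
    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/path/to/id_rsa") // hypothetical key path
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local test VM only
        }
        client, err := ssh.Dial("tcp", "192.168.105.4:22", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        sess, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer sess.Close()

        // The same hostname command the log shows being run.
        out, err := sess.CombinedOutput(`sudo hostname functional-742000 && echo "functional-742000" | sudo tee /etc/hostname`)
        fmt.Printf("output: %s err: %v\n", out, err)
    }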
	I0925 04:08:36.422422    3401 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0925 04:08:36.422430    3401 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17297-1010/.minikube CaCertPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17297-1010/.minikube}
	I0925 04:08:36.422437    3401 buildroot.go:174] setting up certificates
	I0925 04:08:36.422443    3401 provision.go:83] configureAuth start
	I0925 04:08:36.422446    3401 provision.go:138] copyHostCerts
	I0925 04:08:36.422544    3401 exec_runner.go:144] found /Users/jenkins/minikube-integration/17297-1010/.minikube/ca.pem, removing ...
	I0925 04:08:36.422548    3401 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17297-1010/.minikube/ca.pem
	I0925 04:08:36.422643    3401 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17297-1010/.minikube/ca.pem (1082 bytes)
	I0925 04:08:36.422816    3401 exec_runner.go:144] found /Users/jenkins/minikube-integration/17297-1010/.minikube/cert.pem, removing ...
	I0925 04:08:36.422818    3401 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17297-1010/.minikube/cert.pem
	I0925 04:08:36.422857    3401 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17297-1010/.minikube/cert.pem (1123 bytes)
	I0925 04:08:36.422984    3401 exec_runner.go:144] found /Users/jenkins/minikube-integration/17297-1010/.minikube/key.pem, removing ...
	I0925 04:08:36.422986    3401 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17297-1010/.minikube/key.pem
	I0925 04:08:36.423091    3401 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17297-1010/.minikube/key.pem (1679 bytes)
	I0925 04:08:36.423197    3401 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca-key.pem org=jenkins.functional-742000 san=[192.168.105.4 192.168.105.4 localhost 127.0.0.1 minikube functional-742000]
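
The server-cert generation step above issues a certificate whose SANs cover the VM's IP, localhost, and the machine names. A sketch of building such a certificate with crypto/x509 — self-signed here for brevity, whereas minikube signs with its CA; the validity period mirrors the CertExpiration value from the config:

    // gen_cert.go: emit a PEM certificate carrying the SANs from the log line.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.functional-742000"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration above
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SAN entries from the san=[...] list in the log.
            DNSNames:    []string{"localhost", "minikube", "functional-742000"},
            IPAddresses: []net.IP{net.ParseIP("192.168.105.4"), net.ParseIP("127.0.0.1")},
        }
        // Self-signed sketch: template doubles as the issuer.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }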
	I0925 04:08:36.490273    3401 provision.go:172] copyRemoteCerts
	I0925 04:08:36.490301    3401 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0925 04:08:36.490306    3401 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/functional-742000/id_rsa Username:docker}
	I0925 04:08:36.528770    3401 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0925 04:08:36.536503    3401 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0925 04:08:36.543377    3401 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0925 04:08:36.550512    3401 provision.go:86] duration metric: configureAuth took 128.064916ms
	I0925 04:08:36.550518    3401 buildroot.go:189] setting minikube options for container-runtime
	I0925 04:08:36.550627    3401 config.go:182] Loaded profile config "functional-742000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 04:08:36.550670    3401 main.go:141] libmachine: Using SSH client type: native
	I0925 04:08:36.550881    3401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105030760] 0x105032ed0 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0925 04:08:36.550884    3401 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0925 04:08:36.622681    3401 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0925 04:08:36.622687    3401 buildroot.go:70] root file system type: tmpfs
	I0925 04:08:36.622731    3401 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0925 04:08:36.622785    3401 main.go:141] libmachine: Using SSH client type: native
	I0925 04:08:36.623023    3401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105030760] 0x105032ed0 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0925 04:08:36.623055    3401 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0925 04:08:36.700029    3401 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0925 04:08:36.700074    3401 main.go:141] libmachine: Using SSH client type: native
	I0925 04:08:36.700294    3401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105030760] 0x105032ed0 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0925 04:08:36.700301    3401 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0925 04:08:36.774120    3401 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0925 04:08:36.774126    3401 machine.go:91] provisioned docker machine in 499.030375ms
	I0925 04:08:36.774130    3401 start.go:300] post-start starting for "functional-742000" (driver="qemu2")
	I0925 04:08:36.774134    3401 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0925 04:08:36.774178    3401 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0925 04:08:36.774184    3401 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/functional-742000/id_rsa Username:docker}
	I0925 04:08:36.812585    3401 ssh_runner.go:195] Run: cat /etc/os-release
	I0925 04:08:36.814093    3401 info.go:137] Remote host: Buildroot 2021.02.12
	I0925 04:08:36.814098    3401 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17297-1010/.minikube/addons for local assets ...
	I0925 04:08:36.814171    3401 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17297-1010/.minikube/files for local assets ...
	I0925 04:08:36.814278    3401 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17297-1010/.minikube/files/etc/ssl/certs/14692.pem -> 14692.pem in /etc/ssl/certs
	I0925 04:08:36.814375    3401 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17297-1010/.minikube/files/etc/test/nested/copy/1469/hosts -> hosts in /etc/test/nested/copy/1469
	I0925 04:08:36.814407    3401 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1469
	I0925 04:08:36.817070    3401 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/files/etc/ssl/certs/14692.pem --> /etc/ssl/certs/14692.pem (1708 bytes)
	I0925 04:08:36.823684    3401 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/files/etc/test/nested/copy/1469/hosts --> /etc/test/nested/copy/1469/hosts (40 bytes)
	I0925 04:08:36.830814    3401 start.go:303] post-start completed in 56.679167ms
	I0925 04:08:36.830819    3401 fix.go:56] fixHost completed within 569.2255ms
	I0925 04:08:36.830855    3401 main.go:141] libmachine: Using SSH client type: native
	I0925 04:08:36.831084    3401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105030760] 0x105032ed0 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0925 04:08:36.831087    3401 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0925 04:08:36.902248    3401 main.go:141] libmachine: SSH cmd err, output: <nil>: 1695640116.911345543
	
	I0925 04:08:36.902256    3401 fix.go:206] guest clock: 1695640116.911345543
	I0925 04:08:36.902259    3401 fix.go:219] Guest: 2023-09-25 04:08:36.911345543 -0700 PDT Remote: 2023-09-25 04:08:36.83082 -0700 PDT m=+0.664220751 (delta=80.525543ms)
	I0925 04:08:36.902268    3401 fix.go:190] guest clock delta is within tolerance: 80.525543ms
	I0925 04:08:36.902270    3401 start.go:83] releasing machines lock for "functional-742000", held for 640.682292ms
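
The guest-clock check above runs `date +%s.%N` in the VM and compares it to the host clock. A sketch of that delta computation, using the exact value the VM echoed; the 2s tolerance is illustrative and may differ from minikube's actual threshold:

    // clock_delta.go: parse the guest's epoch timestamp and compute the skew.
    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    func main() {
        guestOut := "1695640116.911345543" // value echoed by the VM in the log
        parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
        sec, _ := strconv.ParseInt(parts[0], 10, 64)
        nsec, _ := strconv.ParseInt(parts[1], 10, 64)
        guest := time.Unix(sec, nsec)

        delta := time.Since(guest)
        if delta < 0 {
            delta = -delta // skew can go either way
        }
        const tolerance = 2 * time.Second // illustrative tolerance
        fmt.Printf("delta=%v withinTolerance=%v\n", delta, delta <= tolerance)
    }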
	I0925 04:08:36.902529    3401 ssh_runner.go:195] Run: cat /version.json
	I0925 04:08:36.902531    3401 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0925 04:08:36.902534    3401 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/functional-742000/id_rsa Username:docker}
	I0925 04:08:36.902548    3401 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/functional-742000/id_rsa Username:docker}
	I0925 04:08:36.941150    3401 ssh_runner.go:195] Run: systemctl --version
	I0925 04:08:36.983227    3401 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0925 04:08:36.984898    3401 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0925 04:08:36.984927    3401 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0925 04:08:36.987558    3401 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0925 04:08:36.987562    3401 start.go:469] detecting cgroup driver to use...
	I0925 04:08:36.987631    3401 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0925 04:08:36.993305    3401 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0925 04:08:36.996670    3401 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0925 04:08:37.000200    3401 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0925 04:08:37.000223    3401 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0925 04:08:37.003663    3401 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0925 04:08:37.006999    3401 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0925 04:08:37.010183    3401 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0925 04:08:37.013256    3401 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0925 04:08:37.016575    3401 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0925 04:08:37.020265    3401 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0925 04:08:37.023335    3401 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0925 04:08:37.026237    3401 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 04:08:37.104935    3401 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0925 04:08:37.111598    3401 start.go:469] detecting cgroup driver to use...
	I0925 04:08:37.111658    3401 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0925 04:08:37.117628    3401 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0925 04:08:37.123429    3401 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0925 04:08:37.132390    3401 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0925 04:08:37.137424    3401 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0925 04:08:37.141983    3401 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0925 04:08:37.147531    3401 ssh_runner.go:195] Run: which cri-dockerd
	I0925 04:08:37.148781    3401 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0925 04:08:37.152109    3401 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
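
"scp memory --> file" in these lines means an in-memory buffer is written directly to a path inside the VM. One way to sketch that on top of an existing SSH connection is github.com/pkg/sftp (minikube's own transfer mechanism may differ; the target path here is a writable stand-in since /etc requires sudo):

    // write_remote.go: push an in-memory config file to the VM.
    package main

    import (
        "log"

        "github.com/pkg/sftp"
        "golang.org/x/crypto/ssh"
    )

    func writeRemote(conn *ssh.Client, path string, data []byte) error {
        c, err := sftp.NewClient(conn)
        if err != nil {
            return err
        }
        defer c.Close()

        f, err := c.Create(path) // creates or truncates the remote file
        if err != nil {
            return err
        }
        defer f.Close()

        _, err = f.Write(data)
        return err
    }

    func main() {
        var conn *ssh.Client // assume dialed as in the earlier SSH sketch
        if conn == nil {
            log.Fatal("dial the VM first")
        }
        if err := writeRemote(conn, "/tmp/10-cni.conf", []byte("[Service]\n")); err != nil {
            log.Fatal(err)
        }
    }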
	I0925 04:08:37.157219    3401 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0925 04:08:37.235331    3401 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0925 04:08:37.313731    3401 docker.go:554] configuring docker to use "cgroupfs" as cgroup driver...
	I0925 04:08:37.313785    3401 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0925 04:08:37.318812    3401 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 04:08:37.403006    3401 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0925 04:08:48.659647    3401 ssh_runner.go:235] Completed: sudo systemctl restart docker: (11.256618542s)
	I0925 04:08:48.659714    3401 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0925 04:08:48.724865    3401 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0925 04:08:48.788597    3401 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0925 04:08:48.853889    3401 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 04:08:48.913468    3401 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0925 04:08:48.923791    3401 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 04:08:49.000238    3401 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0925 04:08:49.025552    3401 start.go:516] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0925 04:08:49.025640    3401 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0925 04:08:49.028350    3401 start.go:537] Will wait 60s for crictl version
	I0925 04:08:49.028400    3401 ssh_runner.go:195] Run: which crictl
	I0925 04:08:49.029729    3401 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0925 04:08:49.046377    3401 start.go:553] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I0925 04:08:49.046442    3401 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0925 04:08:49.054027    3401 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0925 04:08:49.065450    3401 out.go:204] * Preparing Kubernetes v1.28.2 on Docker 24.0.6 ...
	I0925 04:08:49.065591    3401 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0925 04:08:49.072369    3401 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0925 04:08:49.076440    3401 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0925 04:08:49.076491    3401 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0925 04:08:49.082376    3401 docker.go:664] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-742000
	registry.k8s.io/kube-apiserver:v1.28.2
	registry.k8s.io/kube-proxy:v1.28.2
	registry.k8s.io/kube-controller-manager:v1.28.2
	registry.k8s.io/kube-scheduler:v1.28.2
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0925 04:08:49.082388    3401 docker.go:594] Images already preloaded, skipping extraction
	I0925 04:08:49.082430    3401 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0925 04:08:49.088047    3401 docker.go:664] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-742000
	registry.k8s.io/kube-apiserver:v1.28.2
	registry.k8s.io/kube-scheduler:v1.28.2
	registry.k8s.io/kube-proxy:v1.28.2
	registry.k8s.io/kube-controller-manager:v1.28.2
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0925 04:08:49.088053    3401 cache_images.go:84] Images are preloaded, skipping loading
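
The two `docker images --format` runs above implement the preload check: if every required image is already in the daemon, extraction and loading are skipped. A sketch of that comparison (the required list is trimmed to a few entries for illustration):

    // preload_check.go: verify required images exist in the local Docker daemon.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
        if err != nil {
            panic(err)
        }
        have := map[string]bool{}
        for _, img := range strings.Fields(string(out)) {
            have[img] = true
        }
        required := []string{ // subset of the preloaded list above
            "registry.k8s.io/kube-apiserver:v1.28.2",
            "registry.k8s.io/etcd:3.5.9-0",
            "registry.k8s.io/pause:3.9",
        }
        missing := 0
        for _, img := range required {
            if !have[img] {
                fmt.Println("missing:", img)
                missing++
            }
        }
        fmt.Printf("preloaded=%v\n", missing == 0)
    }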
	I0925 04:08:49.088097    3401 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0925 04:08:49.098410    3401 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0925 04:08:49.098430    3401 cni.go:84] Creating CNI manager for ""
	I0925 04:08:49.098435    3401 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 04:08:49.098444    3401 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0925 04:08:49.098454    3401 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.4 APIServerPort:8441 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-742000 NodeName:functional-742000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0925 04:08:49.098515    3401 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.4
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-742000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.4
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.4"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0925 04:08:49.098543    3401 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=functional-742000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:functional-742000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:}
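
The kubelet unit above is rendered from per-node values (version, hostname, node IP). A sketch of generating such a drop-in with text/template — the template and field names are illustrative, not minikube's actual template:

    // render_kubelet.go: fill a kubelet drop-in template with node values.
    package main

    import (
        "os"
        "text/template"
    )

    const unit = `[Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --hostname-override={{.Node}} --node-ip={{.IP}} --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock
    `

    func main() {
        t := template.Must(template.New("kubelet").Parse(unit))
        err := t.Execute(os.Stdout, struct{ Version, Node, IP string }{
            Version: "v1.28.2", Node: "functional-742000", IP: "192.168.105.4",
        })
        if err != nil {
            panic(err)
        }
    }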
	I0925 04:08:49.098607    3401 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I0925 04:08:49.103105    3401 binaries.go:44] Found k8s binaries, skipping transfer
	I0925 04:08:49.103144    3401 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0925 04:08:49.107270    3401 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0925 04:08:49.114192    3401 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0925 04:08:49.120907    3401 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1953 bytes)
	I0925 04:08:49.125935    3401 ssh_runner.go:195] Run: grep 192.168.105.4	control-plane.minikube.internal$ /etc/hosts
	I0925 04:08:49.127244    3401 certs.go:56] Setting up /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/functional-742000 for IP: 192.168.105.4
	I0925 04:08:49.127251    3401 certs.go:190] acquiring lock for shared ca certs: {Name:mk095b03680bcdeba6c321a9f458c9fbafa67639 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 04:08:49.127375    3401 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17297-1010/.minikube/ca.key
	I0925 04:08:49.127410    3401 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17297-1010/.minikube/proxy-client-ca.key
	I0925 04:08:49.127483    3401 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/functional-742000/client.key
	I0925 04:08:49.127525    3401 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/functional-742000/apiserver.key.942c473b
	I0925 04:08:49.127560    3401 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/functional-742000/proxy-client.key
	I0925 04:08:49.127711    3401 certs.go:437] found cert: /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/1469.pem (1338 bytes)
	W0925 04:08:49.127735    3401 certs.go:433] ignoring /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/1469_empty.pem, impossibly tiny 0 bytes
	I0925 04:08:49.127742    3401 certs.go:437] found cert: /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca-key.pem (1675 bytes)
	I0925 04:08:49.127760    3401 certs.go:437] found cert: /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem (1082 bytes)
	I0925 04:08:49.127781    3401 certs.go:437] found cert: /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/cert.pem (1123 bytes)
	I0925 04:08:49.127800    3401 certs.go:437] found cert: /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/key.pem (1679 bytes)
	I0925 04:08:49.127838    3401 certs.go:437] found cert: /Users/jenkins/minikube-integration/17297-1010/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17297-1010/.minikube/files/etc/ssl/certs/14692.pem (1708 bytes)
	I0925 04:08:49.128160    3401 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/functional-742000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0925 04:08:49.134653    3401 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/functional-742000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0925 04:08:49.141799    3401 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/functional-742000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0925 04:08:49.148679    3401 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/functional-742000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0925 04:08:49.155295    3401 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0925 04:08:49.162357    3401 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0925 04:08:49.169362    3401 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0925 04:08:49.175945    3401 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0925 04:08:49.182472    3401 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0925 04:08:49.189569    3401 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/1469.pem --> /usr/share/ca-certificates/1469.pem (1338 bytes)
	I0925 04:08:49.196431    3401 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/files/etc/ssl/certs/14692.pem --> /usr/share/ca-certificates/14692.pem (1708 bytes)
	I0925 04:08:49.203071    3401 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0925 04:08:49.208090    3401 ssh_runner.go:195] Run: openssl version
	I0925 04:08:49.209780    3401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0925 04:08:49.213267    3401 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0925 04:08:49.214849    3401 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 25 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I0925 04:08:49.214866    3401 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0925 04:08:49.216726    3401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0925 04:08:49.219283    3401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1469.pem && ln -fs /usr/share/ca-certificates/1469.pem /etc/ssl/certs/1469.pem"
	I0925 04:08:49.222517    3401 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1469.pem
	I0925 04:08:49.223978    3401 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 25 11:07 /usr/share/ca-certificates/1469.pem
	I0925 04:08:49.223998    3401 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1469.pem
	I0925 04:08:49.225673    3401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1469.pem /etc/ssl/certs/51391683.0"
	I0925 04:08:49.228615    3401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14692.pem && ln -fs /usr/share/ca-certificates/14692.pem /etc/ssl/certs/14692.pem"
	I0925 04:08:49.231456    3401 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14692.pem
	I0925 04:08:49.232773    3401 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 25 11:07 /usr/share/ca-certificates/14692.pem
	I0925 04:08:49.232791    3401 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14692.pem
	I0925 04:08:49.234462    3401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14692.pem /etc/ssl/certs/3ec20f2e.0"
	I0925 04:08:49.237523    3401 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0925 04:08:49.238903    3401 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0925 04:08:49.240652    3401 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0925 04:08:49.242351    3401 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0925 04:08:49.244158    3401 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0925 04:08:49.245869    3401 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0925 04:08:49.247711    3401 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
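
Each `openssl x509 -noout -checkend 86400` call above asks whether a certificate expires within the next 24 hours. The equivalent check in Go, for one of the same cert paths:

    // checkend.go: report whether a PEM certificate outlives the next 24h.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    func main() {
        raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-etcd-client.crt")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(raw)
        if block == nil {
            log.Fatal("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        deadline := time.Now().Add(86400 * time.Second) // -checkend 86400
        fmt.Printf("expires %s, valid past checkend: %v\n",
            cert.NotAfter, cert.NotAfter.After(deadline))
    }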
	I0925 04:08:49.249427    3401 kubeadm.go:404] StartCluster: {Name:functional-742000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:functional-742000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 04:08:49.249498    3401 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0925 04:08:49.255147    3401 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0925 04:08:49.258350    3401 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0925 04:08:49.258362    3401 kubeadm.go:636] restartCluster start
	I0925 04:08:49.258390    3401 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0925 04:08:49.261408    3401 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0925 04:08:49.261685    3401 kubeconfig.go:92] found "functional-742000" server: "https://192.168.105.4:8441"
	I0925 04:08:49.262447    3401 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0925 04:08:49.265577    3401 kubeadm.go:602] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.105.4"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
	I0925 04:08:49.265580    3401 kubeadm.go:1128] stopping kube-system containers ...
	I0925 04:08:49.265613    3401 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0925 04:08:49.273441    3401 docker.go:463] Stopping containers: [9dfb99f2a578 dbf1933a6211 0b80f8304b6c 9739b7a7e929 32f0afd6c286 6dc0c59d3999 81582e65ba6a cacd00726bcc 8b742965d8b9 6ba65cf0457c 62fa3d0d412b 14a41f3d2151 709c4e9cbe59 2f3eea391e19 ea252358a915 4e0e4289c439 0604eb403801 844267dc989e 03e867f26643 eaba1cff38b6 6bbd10c0b332 11d4c0aab0a2 3a06b5ed5b07 4dfbce73072c]
	I0925 04:08:49.273513    3401 ssh_runner.go:195] Run: docker stop 9dfb99f2a578 dbf1933a6211 0b80f8304b6c 9739b7a7e929 32f0afd6c286 6dc0c59d3999 81582e65ba6a cacd00726bcc 8b742965d8b9 6ba65cf0457c 62fa3d0d412b 14a41f3d2151 709c4e9cbe59 2f3eea391e19 ea252358a915 4e0e4289c439 0604eb403801 844267dc989e 03e867f26643 eaba1cff38b6 6bbd10c0b332 11d4c0aab0a2 3a06b5ed5b07 4dfbce73072c
	I0925 04:08:49.280144    3401 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0925 04:08:49.361125    3401 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0925 04:08:49.365691    3401 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Sep 25 11:07 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5657 Sep 25 11:07 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Sep 25 11:07 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Sep 25 11:07 /etc/kubernetes/scheduler.conf
	
	I0925 04:08:49.365726    3401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0925 04:08:49.369239    3401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0925 04:08:49.372348    3401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0925 04:08:49.375199    3401 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0925 04:08:49.375226    3401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0925 04:08:49.378204    3401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0925 04:08:49.381352    3401 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0925 04:08:49.381368    3401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0925 04:08:49.384483    3401 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0925 04:08:49.387073    3401 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0925 04:08:49.387076    3401 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0925 04:08:49.407972    3401 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0925 04:08:49.996913    3401 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0925 04:08:50.100276    3401 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0925 04:08:50.124734    3401 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0925 04:08:50.150894    3401 api_server.go:52] waiting for apiserver process to appear ...
	I0925 04:08:50.150943    3401 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 04:08:50.160202    3401 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 04:08:50.666134    3401 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 04:08:51.166106    3401 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 04:08:51.170359    3401 api_server.go:72] duration metric: took 1.019466291s to wait for apiserver process to appear ...
	I0925 04:08:51.170364    3401 api_server.go:88] waiting for apiserver healthz status ...
	I0925 04:08:51.170379    3401 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0925 04:08:52.731025    3401 api_server.go:279] https://192.168.105.4:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0925 04:08:52.731033    3401 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0925 04:08:52.731038    3401 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0925 04:08:52.761316    3401 api_server.go:279] https://192.168.105.4:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0925 04:08:52.761326    3401 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0925 04:08:53.263023    3401 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0925 04:08:53.266272    3401 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0925 04:08:53.266278    3401 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0925 04:08:53.763428    3401 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0925 04:08:53.766616    3401 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0925 04:08:53.766623    3401 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0925 04:08:54.263389    3401 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0925 04:08:54.267426    3401 api_server.go:279] https://192.168.105.4:8441/healthz returned 200:
	ok
	I0925 04:08:54.272298    3401 api_server.go:141] control plane version: v1.28.2
	I0925 04:08:54.272306    3401 api_server.go:131] duration metric: took 3.101937333s to wait for apiserver health ...
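
The healthz wait above tolerates the 403 (anonymous user) and 500 (post-start hooks still running) responses until the apiserver returns 200. A minimal polling sketch of the same idea; skipping TLS verification is only acceptable here because it targets a disposable local test cluster, and the 500ms cadence mirrors the retry interval visible in the timestamps:

    // wait_healthz.go: poll the apiserver /healthz until OK or deadline.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 2 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // local test VM only
            },
        }
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://192.168.105.4:8441/healthz")
            if err == nil {
                code := resp.StatusCode
                resp.Body.Close()
                if code == http.StatusOK {
                    fmt.Println("apiserver healthy")
                    return
                }
                fmt.Println("healthz returned", code, "- retrying")
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for healthz")
    }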
	I0925 04:08:54.272310    3401 cni.go:84] Creating CNI manager for ""
	I0925 04:08:54.272316    3401 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 04:08:54.279142    3401 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0925 04:08:54.283098    3401 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0925 04:08:54.287990    3401 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0925 04:08:54.295212    3401 system_pods.go:43] waiting for kube-system pods to appear ...
	I0925 04:08:54.300320    3401 system_pods.go:59] 6 kube-system pods found
	I0925 04:08:54.300332    3401 system_pods.go:61] "coredns-5dd5756b68-g4fc9" [ae52a96e-701a-451b-aad9-cf2c70757dfa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 04:08:54.300335    3401 system_pods.go:61] "etcd-functional-742000" [5758577b-0bb3-42e7-989c-d7e11978d024] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0925 04:08:54.300339    3401 system_pods.go:61] "kube-apiserver-functional-742000" [1e0d08a0-b1a6-4395-b82b-29eb50714b4f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0925 04:08:54.300341    3401 system_pods.go:61] "kube-controller-manager-functional-742000" [84570bc3-0a11-49ae-8671-cb3b748e5bf3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0925 04:08:54.300343    3401 system_pods.go:61] "kube-proxy-nrkqn" [9255a7db-f2f1-4a06-a00b-60a55ff8168a] Running
	I0925 04:08:54.300346    3401 system_pods.go:61] "kube-scheduler-functional-742000" [170fa218-d497-4c51-921a-3d719f713061] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0925 04:08:54.300349    3401 system_pods.go:74] duration metric: took 5.131708ms to wait for pod list to return data ...
	I0925 04:08:54.300352    3401 node_conditions.go:102] verifying NodePressure condition ...
	I0925 04:08:54.301949    3401 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0925 04:08:54.301955    3401 node_conditions.go:123] node cpu capacity is 2
	I0925 04:08:54.301959    3401 node_conditions.go:105] duration metric: took 1.605375ms to run NodePressure ...
	I0925 04:08:54.301964    3401 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0925 04:08:54.366536    3401 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0925 04:08:54.368702    3401 kubeadm.go:787] kubelet initialised
	I0925 04:08:54.368706    3401 kubeadm.go:788] duration metric: took 2.163125ms waiting for restarted kubelet to initialise ...
	I0925 04:08:54.368711    3401 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0925 04:08:54.371256    3401 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-g4fc9" in "kube-system" namespace to be "Ready" ...
	I0925 04:08:56.380822    3401 pod_ready.go:102] pod "coredns-5dd5756b68-g4fc9" in "kube-system" namespace has status "Ready":"False"
	I0925 04:08:57.880515    3401 pod_ready.go:92] pod "coredns-5dd5756b68-g4fc9" in "kube-system" namespace has status "Ready":"True"
	I0925 04:08:57.880521    3401 pod_ready.go:81] duration metric: took 3.509257292s waiting for pod "coredns-5dd5756b68-g4fc9" in "kube-system" namespace to be "Ready" ...
	I0925 04:08:57.880525    3401 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-742000" in "kube-system" namespace to be "Ready" ...
	I0925 04:08:59.889697    3401 pod_ready.go:102] pod "etcd-functional-742000" in "kube-system" namespace has status "Ready":"False"
	I0925 04:09:01.890118    3401 pod_ready.go:102] pod "etcd-functional-742000" in "kube-system" namespace has status "Ready":"False"
	I0925 04:09:04.390197    3401 pod_ready.go:102] pod "etcd-functional-742000" in "kube-system" namespace has status "Ready":"False"
	I0925 04:09:06.889285    3401 pod_ready.go:102] pod "etcd-functional-742000" in "kube-system" namespace has status "Ready":"False"
	I0925 04:09:08.389728    3401 pod_ready.go:92] pod "etcd-functional-742000" in "kube-system" namespace has status "Ready":"True"
	I0925 04:09:08.389734    3401 pod_ready.go:81] duration metric: took 10.509198083s waiting for pod "etcd-functional-742000" in "kube-system" namespace to be "Ready" ...
	I0925 04:09:08.389737    3401 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-742000" in "kube-system" namespace to be "Ready" ...
	I0925 04:09:08.392193    3401 pod_ready.go:92] pod "kube-apiserver-functional-742000" in "kube-system" namespace has status "Ready":"True"
	I0925 04:09:08.392197    3401 pod_ready.go:81] duration metric: took 2.457ms waiting for pod "kube-apiserver-functional-742000" in "kube-system" namespace to be "Ready" ...
	I0925 04:09:08.392201    3401 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-742000" in "kube-system" namespace to be "Ready" ...
	I0925 04:09:08.394686    3401 pod_ready.go:92] pod "kube-controller-manager-functional-742000" in "kube-system" namespace has status "Ready":"True"
	I0925 04:09:08.394689    3401 pod_ready.go:81] duration metric: took 2.485708ms waiting for pod "kube-controller-manager-functional-742000" in "kube-system" namespace to be "Ready" ...
	I0925 04:09:08.394692    3401 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-nrkqn" in "kube-system" namespace to be "Ready" ...
	I0925 04:09:08.396790    3401 pod_ready.go:92] pod "kube-proxy-nrkqn" in "kube-system" namespace has status "Ready":"True"
	I0925 04:09:08.396792    3401 pod_ready.go:81] duration metric: took 2.098333ms waiting for pod "kube-proxy-nrkqn" in "kube-system" namespace to be "Ready" ...
	I0925 04:09:08.396795    3401 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-742000" in "kube-system" namespace to be "Ready" ...
	I0925 04:09:09.295558    3401 pod_ready.go:92] pod "kube-scheduler-functional-742000" in "kube-system" namespace has status "Ready":"True"
	I0925 04:09:09.295568    3401 pod_ready.go:81] duration metric: took 898.765709ms waiting for pod "kube-scheduler-functional-742000" in "kube-system" namespace to be "Ready" ...
	I0925 04:09:09.295571    3401 pod_ready.go:38] duration metric: took 14.926844708s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0925 04:09:09.295581    3401 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0925 04:09:09.299539    3401 ops.go:34] apiserver oom_adj: -16
	I0925 04:09:09.299543    3401 kubeadm.go:640] restartCluster took 20.041162791s
	I0925 04:09:09.299546    3401 kubeadm.go:406] StartCluster complete in 20.050105416s
	I0925 04:09:09.299553    3401 settings.go:142] acquiring lock: {Name:mkb5a0822179f07ef9369c44aa9b64eb9ef74eed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 04:09:09.299641    3401 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17297-1010/kubeconfig
	I0925 04:09:09.300081    3401 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/kubeconfig: {Name:mkaa9d09ca2bf27c1a43efc9acf938adcc68343d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 04:09:09.300476    3401 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0925 04:09:09.300487    3401 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0925 04:09:09.300646    3401 addons.go:69] Setting storage-provisioner=true in profile "functional-742000"
	I0925 04:09:09.300655    3401 addons.go:231] Setting addon storage-provisioner=true in "functional-742000"
	W0925 04:09:09.300658    3401 addons.go:240] addon storage-provisioner should already be in state true
	I0925 04:09:09.300674    3401 addons.go:69] Setting default-storageclass=true in profile "functional-742000"
	I0925 04:09:09.300692    3401 host.go:66] Checking if "functional-742000" exists ...
	I0925 04:09:09.300736    3401 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-742000"
	I0925 04:09:09.300754    3401 config.go:182] Loaded profile config "functional-742000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	W0925 04:09:09.301245    3401 host.go:54] host status for "functional-742000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/functional-742000/monitor: connect: connection refused
	W0925 04:09:09.301254    3401 addons.go:277] "functional-742000" is not running, setting storage-provisioner=true and skipping enablement (err=<nil>)
	I0925 04:09:09.304158    3401 kapi.go:248] "coredns" deployment in "kube-system" namespace and "functional-742000" context rescaled to 1 replicas
	I0925 04:09:09.304168    3401 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 04:09:09.311325    3401 out.go:177] * Verifying Kubernetes components...
	I0925 04:09:09.305880    3401 addons.go:231] Setting addon default-storageclass=true in "functional-742000"
	W0925 04:09:09.311337    3401 addons.go:240] addon default-storageclass should already be in state true
	I0925 04:09:09.311353    3401 host.go:66] Checking if "functional-742000" exists ...
	I0925 04:09:09.315366    3401 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0925 04:09:09.316035    3401 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0925 04:09:09.316039    3401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0925 04:09:09.316045    3401 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/functional-742000/id_rsa Username:docker}
	I0925 04:09:09.343873    3401 node_ready.go:35] waiting up to 6m0s for node "functional-742000" to be "Ready" ...
	I0925 04:09:09.343890    3401 start.go:896] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0925 04:09:09.361676    3401 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0925 04:09:09.391299    3401 node_ready.go:49] node "functional-742000" has status "Ready":"True"
	I0925 04:09:09.391316    3401 node_ready.go:38] duration metric: took 47.422708ms waiting for node "functional-742000" to be "Ready" ...
	I0925 04:09:09.391320    3401 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0925 04:09:09.598919    3401 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-g4fc9" in "kube-system" namespace to be "Ready" ...
	I0925 04:09:09.608870    3401 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0925 04:09:09.616784    3401 addons.go:502] enable addons completed in 316.298375ms: enabled=[storage-provisioner default-storageclass]
	I0925 04:09:09.990502    3401 pod_ready.go:92] pod "coredns-5dd5756b68-g4fc9" in "kube-system" namespace has status "Ready":"True"
	I0925 04:09:09.990510    3401 pod_ready.go:81] duration metric: took 391.583667ms waiting for pod "coredns-5dd5756b68-g4fc9" in "kube-system" namespace to be "Ready" ...
	I0925 04:09:09.990514    3401 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-742000" in "kube-system" namespace to be "Ready" ...
	I0925 04:09:10.391113    3401 pod_ready.go:92] pod "etcd-functional-742000" in "kube-system" namespace has status "Ready":"True"
	I0925 04:09:10.391120    3401 pod_ready.go:81] duration metric: took 400.603042ms waiting for pod "etcd-functional-742000" in "kube-system" namespace to be "Ready" ...
	I0925 04:09:10.391124    3401 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-742000" in "kube-system" namespace to be "Ready" ...
	I0925 04:09:10.790580    3401 pod_ready.go:92] pod "kube-apiserver-functional-742000" in "kube-system" namespace has status "Ready":"True"
	I0925 04:09:10.790584    3401 pod_ready.go:81] duration metric: took 399.457209ms waiting for pod "kube-apiserver-functional-742000" in "kube-system" namespace to be "Ready" ...
	I0925 04:09:10.790588    3401 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-742000" in "kube-system" namespace to be "Ready" ...
	I0925 04:09:11.189043    3401 pod_ready.go:92] pod "kube-controller-manager-functional-742000" in "kube-system" namespace has status "Ready":"True"
	I0925 04:09:11.193067    3401 pod_ready.go:81] duration metric: took 402.474958ms waiting for pod "kube-controller-manager-functional-742000" in "kube-system" namespace to be "Ready" ...
	I0925 04:09:11.193073    3401 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nrkqn" in "kube-system" namespace to be "Ready" ...
	I0925 04:09:11.590527    3401 pod_ready.go:92] pod "kube-proxy-nrkqn" in "kube-system" namespace has status "Ready":"True"
	I0925 04:09:11.590533    3401 pod_ready.go:81] duration metric: took 397.457667ms waiting for pod "kube-proxy-nrkqn" in "kube-system" namespace to be "Ready" ...
	I0925 04:09:11.590537    3401 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-742000" in "kube-system" namespace to be "Ready" ...
	I0925 04:09:11.990238    3401 pod_ready.go:92] pod "kube-scheduler-functional-742000" in "kube-system" namespace has status "Ready":"True"
	I0925 04:09:11.990243    3401 pod_ready.go:81] duration metric: took 399.702625ms waiting for pod "kube-scheduler-functional-742000" in "kube-system" namespace to be "Ready" ...
	I0925 04:09:11.990275    3401 pod_ready.go:38] duration metric: took 2.598920625s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0925 04:09:11.990289    3401 api_server.go:52] waiting for apiserver process to appear ...
	I0925 04:09:11.990413    3401 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 04:09:11.994953    3401 api_server.go:72] duration metric: took 2.690774667s to wait for apiserver process to appear ...
	I0925 04:09:11.994958    3401 api_server.go:88] waiting for apiserver healthz status ...
	I0925 04:09:11.994964    3401 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0925 04:09:11.997993    3401 api_server.go:279] https://192.168.105.4:8441/healthz returned 200:
	ok
	I0925 04:09:11.998810    3401 api_server.go:141] control plane version: v1.28.2
	I0925 04:09:11.998814    3401 api_server.go:131] duration metric: took 3.8545ms to wait for apiserver health ...
	I0925 04:09:11.998817    3401 system_pods.go:43] waiting for kube-system pods to appear ...
	I0925 04:09:12.189939    3401 system_pods.go:59] 6 kube-system pods found
	I0925 04:09:12.189945    3401 system_pods.go:61] "coredns-5dd5756b68-g4fc9" [ae52a96e-701a-451b-aad9-cf2c70757dfa] Running
	I0925 04:09:12.189947    3401 system_pods.go:61] "etcd-functional-742000" [5758577b-0bb3-42e7-989c-d7e11978d024] Running
	I0925 04:09:12.189949    3401 system_pods.go:61] "kube-apiserver-functional-742000" [1e0d08a0-b1a6-4395-b82b-29eb50714b4f] Running
	I0925 04:09:12.189951    3401 system_pods.go:61] "kube-controller-manager-functional-742000" [84570bc3-0a11-49ae-8671-cb3b748e5bf3] Running
	I0925 04:09:12.189953    3401 system_pods.go:61] "kube-proxy-nrkqn" [9255a7db-f2f1-4a06-a00b-60a55ff8168a] Running
	I0925 04:09:12.189954    3401 system_pods.go:61] "kube-scheduler-functional-742000" [170fa218-d497-4c51-921a-3d719f713061] Running
	I0925 04:09:12.189956    3401 system_pods.go:74] duration metric: took 191.137708ms to wait for pod list to return data ...
	I0925 04:09:12.189959    3401 default_sa.go:34] waiting for default service account to be created ...
	I0925 04:09:12.390570    3401 default_sa.go:45] found service account: "default"
	I0925 04:09:12.390577    3401 default_sa.go:55] duration metric: took 200.615333ms for default service account to be created ...
	I0925 04:09:12.390580    3401 system_pods.go:116] waiting for k8s-apps to be running ...
	I0925 04:09:12.591888    3401 system_pods.go:86] 6 kube-system pods found
	I0925 04:09:12.591895    3401 system_pods.go:89] "coredns-5dd5756b68-g4fc9" [ae52a96e-701a-451b-aad9-cf2c70757dfa] Running
	I0925 04:09:12.591897    3401 system_pods.go:89] "etcd-functional-742000" [5758577b-0bb3-42e7-989c-d7e11978d024] Running
	I0925 04:09:12.591899    3401 system_pods.go:89] "kube-apiserver-functional-742000" [1e0d08a0-b1a6-4395-b82b-29eb50714b4f] Running
	I0925 04:09:12.591901    3401 system_pods.go:89] "kube-controller-manager-functional-742000" [84570bc3-0a11-49ae-8671-cb3b748e5bf3] Running
	I0925 04:09:12.591903    3401 system_pods.go:89] "kube-proxy-nrkqn" [9255a7db-f2f1-4a06-a00b-60a55ff8168a] Running
	I0925 04:09:12.591904    3401 system_pods.go:89] "kube-scheduler-functional-742000" [170fa218-d497-4c51-921a-3d719f713061] Running
	I0925 04:09:12.591907    3401 system_pods.go:126] duration metric: took 201.324625ms to wait for k8s-apps to be running ...
	I0925 04:09:12.591909    3401 system_svc.go:44] waiting for kubelet service to be running ....
	I0925 04:09:12.591981    3401 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0925 04:09:12.597367    3401 system_svc.go:56] duration metric: took 5.454666ms WaitForService to wait for kubelet.
	I0925 04:09:12.597374    3401 kubeadm.go:581] duration metric: took 3.293195584s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0925 04:09:12.597383    3401 node_conditions.go:102] verifying NodePressure condition ...
	I0925 04:09:12.790655    3401 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0925 04:09:12.790663    3401 node_conditions.go:123] node cpu capacity is 2
	I0925 04:09:12.790668    3401 node_conditions.go:105] duration metric: took 193.282584ms to run NodePressure ...
	I0925 04:09:12.790673    3401 start.go:228] waiting for startup goroutines ...
	I0925 04:09:12.790676    3401 start.go:233] waiting for cluster config update ...
	I0925 04:09:12.790681    3401 start.go:242] writing updated cluster config ...
	I0925 04:09:12.790985    3401 ssh_runner.go:195] Run: rm -f paused
	I0925 04:09:12.819570    3401 start.go:600] kubectl: 1.27.2, cluster: 1.28.2 (minor skew: 1)
	I0925 04:09:12.823914    3401 out.go:177] * Done! kubectl is now configured to use "functional-742000" cluster and "default" namespace by default
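Note: the healthz 500s earlier in this run, with [-]poststarthook/rbac/bootstrap-roles failed: reason withheld as the only failing check, are expected while a restarted apiserver finishes bootstrapping its RBAC roles; the endpoint flips to 200 once that hook completes. A minimal way to reproduce the per-check listing against this cluster's apiserver, assuming anonymous access to /healthz (the default) and skipping TLS verification:

    curl -k 'https://192.168.105.4:8441/healthz?verbose'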
	
	* 
	* ==> Docker <==
	* -- Journal begins at Mon 2023-09-25 11:07:21 UTC, ends at Mon 2023-09-25 11:10:03 UTC. --
	Sep 25 11:09:33 functional-742000 cri-dockerd[6590]: time="2023-09-25T11:09:33Z" level=info msg="Stop pulling image registry.k8s.io/echoserver-arm:1.8: Status: Downloaded newer image for registry.k8s.io/echoserver-arm:1.8"
	Sep 25 11:09:33 functional-742000 dockerd[6331]: time="2023-09-25T11:09:33.907049706Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 25 11:09:33 functional-742000 dockerd[6331]: time="2023-09-25T11:09:33.907079666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 11:09:33 functional-742000 dockerd[6331]: time="2023-09-25T11:09:33.907085583Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 25 11:09:33 functional-742000 dockerd[6331]: time="2023-09-25T11:09:33.907089750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 11:09:33 functional-742000 dockerd[6331]: time="2023-09-25T11:09:33.940629057Z" level=info msg="shim disconnected" id=291bbd8e9710cad512d75f58a645558234edc182b55de3b112d9b994e7a89493 namespace=moby
	Sep 25 11:09:33 functional-742000 dockerd[6331]: time="2023-09-25T11:09:33.940662267Z" level=warning msg="cleaning up after shim disconnected" id=291bbd8e9710cad512d75f58a645558234edc182b55de3b112d9b994e7a89493 namespace=moby
	Sep 25 11:09:33 functional-742000 dockerd[6331]: time="2023-09-25T11:09:33.940666726Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 25 11:09:33 functional-742000 dockerd[6315]: time="2023-09-25T11:09:33.940767315Z" level=info msg="ignoring event" container=291bbd8e9710cad512d75f58a645558234edc182b55de3b112d9b994e7a89493 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 25 11:09:34 functional-742000 dockerd[6331]: time="2023-09-25T11:09:34.513244843Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 25 11:09:34 functional-742000 dockerd[6331]: time="2023-09-25T11:09:34.513400810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 11:09:34 functional-742000 dockerd[6331]: time="2023-09-25T11:09:34.513548693Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 25 11:09:34 functional-742000 dockerd[6331]: time="2023-09-25T11:09:34.513559111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 11:09:34 functional-742000 dockerd[6331]: time="2023-09-25T11:09:34.546804066Z" level=info msg="shim disconnected" id=8870f22cdb634ac1b7dd7841f8dbb9221b2cc4e540e5fddd8ba5c40f164cb6cf namespace=moby
	Sep 25 11:09:34 functional-742000 dockerd[6331]: time="2023-09-25T11:09:34.546833235Z" level=warning msg="cleaning up after shim disconnected" id=8870f22cdb634ac1b7dd7841f8dbb9221b2cc4e540e5fddd8ba5c40f164cb6cf namespace=moby
	Sep 25 11:09:34 functional-742000 dockerd[6331]: time="2023-09-25T11:09:34.546837443Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 25 11:09:34 functional-742000 dockerd[6315]: time="2023-09-25T11:09:34.546911822Z" level=info msg="ignoring event" container=8870f22cdb634ac1b7dd7841f8dbb9221b2cc4e540e5fddd8ba5c40f164cb6cf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 25 11:09:51 functional-742000 dockerd[6331]: time="2023-09-25T11:09:51.216154027Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 25 11:09:51 functional-742000 dockerd[6331]: time="2023-09-25T11:09:51.216186529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 11:09:51 functional-742000 dockerd[6331]: time="2023-09-25T11:09:51.216208530Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 25 11:09:51 functional-742000 dockerd[6331]: time="2023-09-25T11:09:51.216215155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 11:09:51 functional-742000 dockerd[6315]: time="2023-09-25T11:09:51.260737324Z" level=info msg="ignoring event" container=e7c4b36376bc5f722a31c8c7a99206c1e7c9b548b4f774fe4f7f3d8416b618e6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 25 11:09:51 functional-742000 dockerd[6331]: time="2023-09-25T11:09:51.260815162Z" level=info msg="shim disconnected" id=e7c4b36376bc5f722a31c8c7a99206c1e7c9b548b4f774fe4f7f3d8416b618e6 namespace=moby
	Sep 25 11:09:51 functional-742000 dockerd[6331]: time="2023-09-25T11:09:51.260841288Z" level=warning msg="cleaning up after shim disconnected" id=e7c4b36376bc5f722a31c8c7a99206c1e7c9b548b4f774fe4f7f3d8416b618e6 namespace=moby
	Sep 25 11:09:51 functional-742000 dockerd[6331]: time="2023-09-25T11:09:51.260854747Z" level=info msg="cleaning up dead shim" namespace=moby
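The "shim disconnected" / "cleaning up dead shim" sequences above are the docker shim tearing down containers as they exit; the last one (e7c4b36376bc...) matches the Exited echoserver-arm container in the status table below. A sketch for pulling the same journal window from the node, assuming this run's profile name:

    minikube -p functional-742000 ssh -- sudo journalctl -u docker \
      --since '2023-09-25 11:09:30' --until '2023-09-25 11:09:55'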
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                           CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	e7c4b36376bc5       72565bf5bbedf                                                                   12 seconds ago       Exited              echoserver-arm            2                   8ab282d179c6f       hello-node-connect-7799dfb7c6-zgsq6
	26fcc726e3097       nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70   41 seconds ago       Running             nginx                     0                   bd5d4a16fe64f       nginx-svc
	1cc1b7565e18b       97e04611ad434                                                                   About a minute ago   Running             coredns                   2                   d6ce0e07e5c0a       coredns-5dd5756b68-g4fc9
	245554f90876a       7da62c127fc0f                                                                   About a minute ago   Running             kube-proxy                2                   e825cdef0742e       kube-proxy-nrkqn
	8ecb93fefaf66       64fc40cee3716                                                                   About a minute ago   Running             kube-scheduler            2                   869225a46cf68       kube-scheduler-functional-742000
	11c3b1bd0f8f7       9cdd6470f48c8                                                                   About a minute ago   Running             etcd                      2                   e4d211aa506a2       etcd-functional-742000
	0c045a8d318df       89d57b83c1786                                                                   About a minute ago   Running             kube-controller-manager   2                   660455e27387c       kube-controller-manager-functional-742000
	1256b9394b066       30bb499447fe1                                                                   About a minute ago   Running             kube-apiserver            0                   8786c37d64870       kube-apiserver-functional-742000
	9dfb99f2a5783       64fc40cee3716                                                                   About a minute ago   Exited              kube-scheduler            1                   8b742965d8b9a       kube-scheduler-functional-742000
	dbf1933a62110       97e04611ad434                                                                   About a minute ago   Exited              coredns                   1                   81582e65ba6a8       coredns-5dd5756b68-g4fc9
	0b80f8304b6c1       7da62c127fc0f                                                                   About a minute ago   Exited              kube-proxy                1                   6ba65cf0457c5       kube-proxy-nrkqn
	9739b7a7e929d       89d57b83c1786                                                                   About a minute ago   Exited              kube-controller-manager   1                   62fa3d0d412bf       kube-controller-manager-functional-742000
	6dc0c59d3999c       9cdd6470f48c8                                                                   About a minute ago   Exited              etcd                      1                   cacd00726bcc3       etcd-functional-742000
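The ATTEMPT 1 / Exited rows are the pre-restart control-plane containers left behind by this functional test's apiserver restart; only echoserver-arm, already on attempt 2, is actually crash-looping. Roughly the same listing can be produced on the node via the CRI, assuming cri-dockerd is the configured runtime socket:

    minikube -p functional-742000 ssh -- sudo crictl ps -a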
	
	* 
	* ==> coredns [1cc1b7565e18] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:59504 - 22885 "HINFO IN 5633031943027150203.4434871831535075704. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.011680141s
	[INFO] 10.244.0.1:58362 - 16730 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000091963s
	[INFO] 10.244.0.1:16973 - 13186 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000087213s
	[INFO] 10.244.0.1:1858 - 34203 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000027293s
	[INFO] 10.244.0.1:53295 - 32388 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.001082352s
	[INFO] 10.244.0.1:64237 - 39599 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000096213s
	[INFO] 10.244.0.1:16658 - 12893 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000151508s
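The HINFO query answered NXDOMAIN is CoreDNS's standard startup probe of its upstream, and the A/AAAA answers for nginx-svc.default.svc.cluster.local show in-cluster DNS resolving the test service. The lookup can be re-run from a throwaway pod (pod name and image are illustrative):

    kubectl run dnscheck --rm -it --image=busybox --restart=Never -- \
      nslookup nginx-svc.default.svc.cluster.local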
	
	* 
	* ==> coredns [dbf1933a6211] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:60101 - 7169 "HINFO IN 8027455119369103070.7748337189168415926. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.00451965s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
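This is the replica terminated during the restart: SIGTERM followed by a 5s lameduck window is CoreDNS's normal shutdown path, so nothing here indicates a failure. On a docker-runtime node, logs of an exited container remain readable by id prefix:

    minikube -p functional-742000 ssh -- docker logs dbf1933a6211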
	
	* 
	* ==> describe nodes <==
	* Name:               functional-742000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-742000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1bf6c3d5317028f348e55ea19d261973a6487d3c
	                    minikube.k8s.io/name=functional-742000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_25T04_07_38_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 25 Sep 2023 11:07:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-742000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 25 Sep 2023 11:10:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 25 Sep 2023 11:09:53 +0000   Mon, 25 Sep 2023 11:07:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 25 Sep 2023 11:09:53 +0000   Mon, 25 Sep 2023 11:07:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 25 Sep 2023 11:09:53 +0000   Mon, 25 Sep 2023 11:07:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 25 Sep 2023 11:09:53 +0000   Mon, 25 Sep 2023 11:07:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-742000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 4686ddaaff034e7ca3bcf9e0e376da63
	  System UUID:                4686ddaaff034e7ca3bcf9e0e376da63
	  Boot ID:                    fc0d0486-c830-4785-907f-bd10feca9cb4
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-connect-7799dfb7c6-zgsq6          0 (0%)        0 (0%)      0 (0%)           0 (0%)         34s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	  kube-system                 coredns-5dd5756b68-g4fc9                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m12s
	  kube-system                 etcd-functional-742000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m26s
	  kube-system                 kube-apiserver-functional-742000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 kube-controller-manager-functional-742000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 kube-proxy-nrkqn                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                 kube-scheduler-functional-742000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 2m11s              kube-proxy       
	  Normal  Starting                 70s                kube-proxy       
	  Normal  Starting                 110s               kube-proxy       
	  Normal  Starting                 2m25s              kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m25s              kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m25s              kubelet          Node functional-742000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m25s              kubelet          Node functional-742000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m25s              kubelet          Node functional-742000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                2m22s              kubelet          Node functional-742000 status is now: NodeReady
	  Normal  RegisteredNode           2m13s              node-controller  Node functional-742000 event: Registered Node functional-742000 in Controller
	  Normal  RegisteredNode           98s                node-controller  Node functional-742000 event: Registered Node functional-742000 in Controller
	  Normal  Starting                 73s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  73s (x8 over 73s)  kubelet          Node functional-742000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    73s (x8 over 73s)  kubelet          Node functional-742000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     73s (x7 over 73s)  kubelet          Node functional-742000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  73s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           58s                node-controller  Node functional-742000 event: Registered Node functional-742000 in Controller
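The three RegisteredNode events and three kube-proxy Starting entries line up with the three kube-proxy attempts in the container status section above, one per kubelet restart. This block can be regenerated against the live cluster with:

    kubectl describe node functional-742000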
	
	* 
	* ==> dmesg <==
	* [  +4.190293] systemd-fstab-generator[3649]: Ignoring "noauto" for root device
	[  +0.125894] systemd-fstab-generator[3680]: Ignoring "noauto" for root device
	[  +0.080692] systemd-fstab-generator[3691]: Ignoring "noauto" for root device
	[  +0.085806] systemd-fstab-generator[3704]: Ignoring "noauto" for root device
	[Sep25 11:08] kauditd_printk_skb: 23 callbacks suppressed
	[  +6.191060] systemd-fstab-generator[4217]: Ignoring "noauto" for root device
	[  +0.064245] systemd-fstab-generator[4228]: Ignoring "noauto" for root device
	[  +0.071200] systemd-fstab-generator[4239]: Ignoring "noauto" for root device
	[  +0.062954] systemd-fstab-generator[4250]: Ignoring "noauto" for root device
	[  +0.097276] systemd-fstab-generator[4327]: Ignoring "noauto" for root device
	[  +4.708107] kauditd_printk_skb: 34 callbacks suppressed
	[ +23.876819] systemd-fstab-generator[5896]: Ignoring "noauto" for root device
	[  +0.130993] systemd-fstab-generator[5930]: Ignoring "noauto" for root device
	[  +0.076762] systemd-fstab-generator[5941]: Ignoring "noauto" for root device
	[  +0.090388] systemd-fstab-generator[5954]: Ignoring "noauto" for root device
	[ +11.337001] systemd-fstab-generator[6477]: Ignoring "noauto" for root device
	[  +0.060592] systemd-fstab-generator[6488]: Ignoring "noauto" for root device
	[  +0.066388] systemd-fstab-generator[6499]: Ignoring "noauto" for root device
	[  +0.060389] systemd-fstab-generator[6510]: Ignoring "noauto" for root device
	[  +0.085868] systemd-fstab-generator[6583]: Ignoring "noauto" for root device
	[  +1.091664] systemd-fstab-generator[6830]: Ignoring "noauto" for root device
	[  +3.563266] kauditd_printk_skb: 29 callbacks suppressed
	[Sep25 11:09] kauditd_printk_skb: 11 callbacks suppressed
	[  +9.990600] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.901389] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
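The "suspect GRO implementation" warning is a TCP-stack complaint commonly seen with virtio-net guests under QEMU and is cosmetic here. It can be re-checked on the node, assuming minikube ssh works for this profile:

    minikube -p functional-742000 ssh -- dmesg | grep -i gro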
	
	* 
	* ==> etcd [11c3b1bd0f8f] <==
	* {"level":"info","ts":"2023-09-25T11:08:51.085839Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-09-25T11:08:51.084681Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-25T11:08:51.087259Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-25T11:08:51.084807Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 switched to configuration voters=(527499358918876438)"}
	{"level":"info","ts":"2023-09-25T11:08:51.084832Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-09-25T11:08:51.084599Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2023-09-25T11:08:51.087328Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-25T11:08:51.087411Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2023-09-25T11:08:51.087444Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-09-25T11:08:51.087592Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-25T11:08:51.087644Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-25T11:08:52.155198Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2023-09-25T11:08:52.155357Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-09-25T11:08:52.155487Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2023-09-25T11:08:52.155536Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2023-09-25T11:08:52.155616Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2023-09-25T11:08:52.15565Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2023-09-25T11:08:52.155665Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2023-09-25T11:08:52.157595Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-742000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-25T11:08:52.157641Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-25T11:08:52.157928Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-25T11:08:52.157997Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-25T11:08:52.158092Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-25T11:08:52.15905Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-25T11:08:52.159326Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	
	* 
	* ==> etcd [6dc0c59d3999] <==
	* {"level":"info","ts":"2023-09-25T11:08:11.111602Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-09-25T11:08:12.404198Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 2"}
	{"level":"info","ts":"2023-09-25T11:08:12.404362Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-09-25T11:08:12.404405Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2023-09-25T11:08:12.404433Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2023-09-25T11:08:12.404449Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2023-09-25T11:08:12.404481Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2023-09-25T11:08:12.40453Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2023-09-25T11:08:12.407532Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-742000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-25T11:08:12.407615Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-25T11:08:12.407715Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-25T11:08:12.40775Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-25T11:08:12.407783Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-25T11:08:12.409965Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-25T11:08:12.410097Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2023-09-25T11:08:37.433818Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-09-25T11:08:37.433848Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"functional-742000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2023-09-25T11:08:37.433893Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-25T11:08:37.433907Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-25T11:08:37.437511Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-25T11:08:37.437561Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2023-09-25T11:08:37.449147Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2023-09-25T11:08:37.450434Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-09-25T11:08:37.450466Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-09-25T11:08:37.450471Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"functional-742000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	* 
	* ==> kernel <==
	*  11:10:03 up 2 min,  0 users,  load average: 0.84, 0.31, 0.11
	Linux functional-742000 5.10.57 #1 SMP PREEMPT Mon Sep 18 20:10:16 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [1256b9394b06] <==
	* I0925 11:08:52.820378       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0925 11:08:52.820492       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0925 11:08:52.821477       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0925 11:08:52.823206       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0925 11:08:52.823263       1 aggregator.go:166] initial CRD sync complete...
	I0925 11:08:52.823282       1 autoregister_controller.go:141] Starting autoregister controller
	I0925 11:08:52.823307       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0925 11:08:52.823325       1 cache.go:39] Caches are synced for autoregister controller
	I0925 11:08:52.823609       1 shared_informer.go:318] Caches are synced for configmaps
	I0925 11:08:52.823646       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0925 11:08:52.823661       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0925 11:08:52.855640       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0925 11:08:53.722790       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0925 11:08:53.827995       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.105.4]
	I0925 11:08:53.828437       1 controller.go:624] quota admission added evaluator for: endpoints
	I0925 11:08:53.829781       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0925 11:08:54.349367       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0925 11:08:54.352443       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0925 11:08:54.364586       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0925 11:08:54.374133       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0925 11:08:54.376689       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0925 11:09:14.222615       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.108.72.121"}
	I0925 11:09:19.042912       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.102.188.37"}
	I0925 11:09:29.423184       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0925 11:09:29.466922       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.111.12.229"}
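The "allocated clusterIPs" entries record the Service objects created by the functional tests (invalid-svc, nginx-svc, hello-node-connect). The surviving ones can be listed after the run with:

    kubectl get svc nginx-svc hello-node-connect -o wide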
	
	* 
	* ==> kube-controller-manager [0c045a8d318d] <==
	* I0925 11:09:05.502900       1 shared_informer.go:318] Caches are synced for daemon sets
	I0925 11:09:05.508046       1 shared_informer.go:318] Caches are synced for service account
	I0925 11:09:05.519558       1 shared_informer.go:318] Caches are synced for namespace
	I0925 11:09:05.543682       1 shared_informer.go:318] Caches are synced for resource quota
	I0925 11:09:05.592679       1 shared_informer.go:318] Caches are synced for resource quota
	I0925 11:09:05.610931       1 shared_informer.go:318] Caches are synced for disruption
	I0925 11:09:05.621021       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0925 11:09:05.622142       1 shared_informer.go:318] Caches are synced for deployment
	I0925 11:09:05.683333       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0925 11:09:05.683344       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0925 11:09:05.683381       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0925 11:09:05.684456       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0925 11:09:06.002696       1 shared_informer.go:318] Caches are synced for garbage collector
	I0925 11:09:06.002767       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0925 11:09:06.011851       1 shared_informer.go:318] Caches are synced for garbage collector
	I0925 11:09:29.435298       1 event.go:307] "Event occurred" object="default/hello-node-connect" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-connect-7799dfb7c6 to 1"
	I0925 11:09:29.451325       1 event.go:307] "Event occurred" object="default/hello-node-connect-7799dfb7c6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-connect-7799dfb7c6-zgsq6"
	I0925 11:09:29.455207       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="23.86178ms"
	I0925 11:09:29.462425       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="7.112686ms"
	I0925 11:09:29.462673       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="16.334µs"
	I0925 11:09:29.464214       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="32.627µs"
	I0925 11:09:34.492071       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="37.21µs"
	I0925 11:09:35.499025       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="48.169µs"
	I0925 11:09:36.504999       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="41.71µs"
	I0925 11:09:51.570077       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="39.919µs"
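The ScalingReplicaSet / SuccessfulCreate pair is the hello-node-connect deployment being scaled to one pod; the later "Finished syncing" lines with microsecond durations are no-op resyncs as that pod crash-loops. A quick check of the replica set, assuming kubectl create deployment's default app=<name> label:

    kubectl get replicaset -l app=hello-node-connect -o wide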
	
	* 
	* ==> kube-controller-manager [9739b7a7e929] <==
	* I0925 11:08:25.421818       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0925 11:08:25.421827       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I0925 11:08:25.421836       1 taint_manager.go:211] "Sending events to api server"
	I0925 11:08:25.421933       1 event.go:307] "Event occurred" object="functional-742000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node functional-742000 event: Registered Node functional-742000 in Controller"
	I0925 11:08:25.423570       1 shared_informer.go:318] Caches are synced for PVC protection
	I0925 11:08:25.453135       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0925 11:08:25.453264       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="63.78µs"
	I0925 11:08:25.462427       1 shared_informer.go:318] Caches are synced for node
	I0925 11:08:25.462474       1 range_allocator.go:174] "Sending events to api server"
	I0925 11:08:25.462484       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0925 11:08:25.462486       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0925 11:08:25.462488       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0925 11:08:25.463473       1 shared_informer.go:318] Caches are synced for expand
	I0925 11:08:25.463527       1 shared_informer.go:318] Caches are synced for endpoint
	I0925 11:08:25.465498       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0925 11:08:25.474366       1 shared_informer.go:318] Caches are synced for deployment
	I0925 11:08:25.522405       1 shared_informer.go:318] Caches are synced for persistent volume
	I0925 11:08:25.605663       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0925 11:08:25.620267       1 shared_informer.go:318] Caches are synced for cronjob
	I0925 11:08:25.631730       1 shared_informer.go:318] Caches are synced for resource quota
	I0925 11:08:25.657791       1 shared_informer.go:318] Caches are synced for job
	I0925 11:08:25.675425       1 shared_informer.go:318] Caches are synced for resource quota
	I0925 11:08:25.998008       1 shared_informer.go:318] Caches are synced for garbage collector
	I0925 11:08:26.072746       1 shared_informer.go:318] Caches are synced for garbage collector
	I0925 11:08:26.072797       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	* 
	* ==> kube-proxy [0b80f8304b6c] <==
	* I0925 11:08:11.988007       1 server_others.go:69] "Using iptables proxy"
	I0925 11:08:13.032054       1 node.go:141] Successfully retrieved node IP: 192.168.105.4
	I0925 11:08:13.051082       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0925 11:08:13.051096       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0925 11:08:13.051797       1 server_others.go:152] "Using iptables Proxier"
	I0925 11:08:13.051818       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0925 11:08:13.051882       1 server.go:846] "Version info" version="v1.28.2"
	I0925 11:08:13.051890       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0925 11:08:13.052236       1 config.go:315] "Starting node config controller"
	I0925 11:08:13.052243       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0925 11:08:13.052400       1 config.go:188] "Starting service config controller"
	I0925 11:08:13.052406       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0925 11:08:13.052412       1 config.go:97] "Starting endpoint slice config controller"
	I0925 11:08:13.052414       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0925 11:08:13.152840       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0925 11:08:13.152936       1 shared_informer.go:318] Caches are synced for node config
	I0925 11:08:13.152970       1 shared_informer.go:318] Caches are synced for service config
	
	* 
	* ==> kube-proxy [245554f90876] <==
	* I0925 11:08:53.615822       1 server_others.go:69] "Using iptables proxy"
	I0925 11:08:53.623148       1 node.go:141] Successfully retrieved node IP: 192.168.105.4
	I0925 11:08:53.745010       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0925 11:08:53.745024       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0925 11:08:53.745993       1 server_others.go:152] "Using iptables Proxier"
	I0925 11:08:53.746009       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0925 11:08:53.746080       1 server.go:846] "Version info" version="v1.28.2"
	I0925 11:08:53.746084       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0925 11:08:53.746608       1 config.go:188] "Starting service config controller"
	I0925 11:08:53.746612       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0925 11:08:53.746619       1 config.go:97] "Starting endpoint slice config controller"
	I0925 11:08:53.746620       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0925 11:08:53.746747       1 config.go:315] "Starting node config controller"
	I0925 11:08:53.746749       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0925 11:08:53.847496       1 shared_informer.go:318] Caches are synced for node config
	I0925 11:08:53.847496       1 shared_informer.go:318] Caches are synced for service config
	I0925 11:08:53.847519       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [8ecb93fefaf6] <==
	* I0925 11:08:51.785732       1 serving.go:348] Generated self-signed cert in-memory
	W0925 11:08:52.763331       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0925 11:08:52.763432       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0925 11:08:52.763457       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0925 11:08:52.763473       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0925 11:08:52.787982       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.2"
	I0925 11:08:52.787995       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0925 11:08:52.789339       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0925 11:08:52.789711       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0925 11:08:52.790011       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0925 11:08:52.790018       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0925 11:08:52.890574       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [9dfb99f2a578] <==
	* I0925 11:08:11.669062       1 serving.go:348] Generated self-signed cert in-memory
	W0925 11:08:13.023367       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0925 11:08:13.023385       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0925 11:08:13.023390       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0925 11:08:13.023393       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0925 11:08:13.032951       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.2"
	I0925 11:08:13.033018       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0925 11:08:13.033892       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0925 11:08:13.033977       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0925 11:08:13.033988       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0925 11:08:13.034007       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0925 11:08:13.135191       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0925 11:08:37.426004       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-09-25 11:07:21 UTC, ends at Mon 2023-09-25 11:10:04 UTC. --
	Sep 25 11:09:16 functional-742000 kubelet[6836]: E0925 11:09:16.354210    6836 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"nonexistingimage:latest\\\"\"" pod="default/invalid-svc" podUID="4539b000-d570-43b5-aae1-b79fa3278c43"
	Sep 25 11:09:17 functional-742000 kubelet[6836]: I0925 11:09:17.599053    6836 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwrc4\" (UniqueName: \"kubernetes.io/projected/4539b000-d570-43b5-aae1-b79fa3278c43-kube-api-access-vwrc4\") pod \"4539b000-d570-43b5-aae1-b79fa3278c43\" (UID: \"4539b000-d570-43b5-aae1-b79fa3278c43\") "
	Sep 25 11:09:17 functional-742000 kubelet[6836]: I0925 11:09:17.601236    6836 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4539b000-d570-43b5-aae1-b79fa3278c43-kube-api-access-vwrc4" (OuterVolumeSpecName: "kube-api-access-vwrc4") pod "4539b000-d570-43b5-aae1-b79fa3278c43" (UID: "4539b000-d570-43b5-aae1-b79fa3278c43"). InnerVolumeSpecName "kube-api-access-vwrc4". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 25 11:09:17 functional-742000 kubelet[6836]: I0925 11:09:17.699507    6836 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-vwrc4\" (UniqueName: \"kubernetes.io/projected/4539b000-d570-43b5-aae1-b79fa3278c43-kube-api-access-vwrc4\") on node \"functional-742000\" DevicePath \"\""
	Sep 25 11:09:19 functional-742000 kubelet[6836]: I0925 11:09:19.039951    6836 topology_manager.go:215] "Topology Admit Handler" podUID="c57bd663-b576-4c45-ae7d-3148aa5df0c1" podNamespace="default" podName="nginx-svc"
	Sep 25 11:09:19 functional-742000 kubelet[6836]: I0925 11:09:19.107189    6836 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g254r\" (UniqueName: \"kubernetes.io/projected/c57bd663-b576-4c45-ae7d-3148aa5df0c1-kube-api-access-g254r\") pod \"nginx-svc\" (UID: \"c57bd663-b576-4c45-ae7d-3148aa5df0c1\") " pod="default/nginx-svc"
	Sep 25 11:09:20 functional-742000 kubelet[6836]: I0925 11:09:20.193002    6836 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="4539b000-d570-43b5-aae1-b79fa3278c43" path="/var/lib/kubelet/pods/4539b000-d570-43b5-aae1-b79fa3278c43/volumes"
	Sep 25 11:09:29 functional-742000 kubelet[6836]: I0925 11:09:29.453735    6836 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-svc" podStartSLOduration=7.612778972 podCreationTimestamp="2023-09-25 11:09:19 +0000 UTC" firstStartedPulling="2023-09-25 11:09:19.493915309 +0000 UTC m=+29.386312729" lastFinishedPulling="2023-09-25 11:09:22.334847972 +0000 UTC m=+32.227245393" observedRunningTime="2023-09-25 11:09:23.436889574 +0000 UTC m=+33.329286994" watchObservedRunningTime="2023-09-25 11:09:29.453711636 +0000 UTC m=+39.346109057"
	Sep 25 11:09:29 functional-742000 kubelet[6836]: I0925 11:09:29.454014    6836 topology_manager.go:215] "Topology Admit Handler" podUID="6ca1a74b-5f4a-43e9-8824-e36a6b269514" podNamespace="default" podName="hello-node-connect-7799dfb7c6-zgsq6"
	Sep 25 11:09:29 functional-742000 kubelet[6836]: I0925 11:09:29.460315    6836 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xqhj\" (UniqueName: \"kubernetes.io/projected/6ca1a74b-5f4a-43e9-8824-e36a6b269514-kube-api-access-8xqhj\") pod \"hello-node-connect-7799dfb7c6-zgsq6\" (UID: \"6ca1a74b-5f4a-43e9-8824-e36a6b269514\") " pod="default/hello-node-connect-7799dfb7c6-zgsq6"
	Sep 25 11:09:34 functional-742000 kubelet[6836]: I0925 11:09:34.486637    6836 scope.go:117] "RemoveContainer" containerID="291bbd8e9710cad512d75f58a645558234edc182b55de3b112d9b994e7a89493"
	Sep 25 11:09:35 functional-742000 kubelet[6836]: I0925 11:09:35.493532    6836 scope.go:117] "RemoveContainer" containerID="291bbd8e9710cad512d75f58a645558234edc182b55de3b112d9b994e7a89493"
	Sep 25 11:09:35 functional-742000 kubelet[6836]: I0925 11:09:35.493712    6836 scope.go:117] "RemoveContainer" containerID="8870f22cdb634ac1b7dd7841f8dbb9221b2cc4e540e5fddd8ba5c40f164cb6cf"
	Sep 25 11:09:35 functional-742000 kubelet[6836]: E0925 11:09:35.493794    6836 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 10s restarting failed container=echoserver-arm pod=hello-node-connect-7799dfb7c6-zgsq6_default(6ca1a74b-5f4a-43e9-8824-e36a6b269514)\"" pod="default/hello-node-connect-7799dfb7c6-zgsq6" podUID="6ca1a74b-5f4a-43e9-8824-e36a6b269514"
	Sep 25 11:09:36 functional-742000 kubelet[6836]: I0925 11:09:36.499759    6836 scope.go:117] "RemoveContainer" containerID="8870f22cdb634ac1b7dd7841f8dbb9221b2cc4e540e5fddd8ba5c40f164cb6cf"
	Sep 25 11:09:36 functional-742000 kubelet[6836]: E0925 11:09:36.500319    6836 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 10s restarting failed container=echoserver-arm pod=hello-node-connect-7799dfb7c6-zgsq6_default(6ca1a74b-5f4a-43e9-8824-e36a6b269514)\"" pod="default/hello-node-connect-7799dfb7c6-zgsq6" podUID="6ca1a74b-5f4a-43e9-8824-e36a6b269514"
	Sep 25 11:09:50 functional-742000 kubelet[6836]: E0925 11:09:50.200918    6836 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 25 11:09:50 functional-742000 kubelet[6836]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 25 11:09:50 functional-742000 kubelet[6836]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 25 11:09:50 functional-742000 kubelet[6836]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 25 11:09:50 functional-742000 kubelet[6836]: I0925 11:09:50.245662    6836 scope.go:117] "RemoveContainer" containerID="32f0afd6c286d7ad561481c8e7fd8ed334771ab1562c375e9088b156ca89b13a"
	Sep 25 11:09:51 functional-742000 kubelet[6836]: I0925 11:09:51.191770    6836 scope.go:117] "RemoveContainer" containerID="8870f22cdb634ac1b7dd7841f8dbb9221b2cc4e540e5fddd8ba5c40f164cb6cf"
	Sep 25 11:09:51 functional-742000 kubelet[6836]: I0925 11:09:51.564350    6836 scope.go:117] "RemoveContainer" containerID="8870f22cdb634ac1b7dd7841f8dbb9221b2cc4e540e5fddd8ba5c40f164cb6cf"
	Sep 25 11:09:51 functional-742000 kubelet[6836]: I0925 11:09:51.564542    6836 scope.go:117] "RemoveContainer" containerID="e7c4b36376bc5f722a31c8c7a99206c1e7c9b548b4f774fe4f7f3d8416b618e6"
	Sep 25 11:09:51 functional-742000 kubelet[6836]: E0925 11:09:51.564644    6836 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-7799dfb7c6-zgsq6_default(6ca1a74b-5f4a-43e9-8824-e36a6b269514)\"" pod="default/hello-node-connect-7799dfb7c6-zgsq6" podUID="6ca1a74b-5f4a-43e9-8824-e36a6b269514"
	
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-742000 -n functional-742000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-742000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (34.85s)
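The kubelet log above shows the backend container for this test, echoserver-arm in pod hello-node-connect-7799dfb7c6-zgsq6, cycling through CrashLoopBackOff with an escalating back-off (10s, then 20s), which lines up with the service connectivity check timing out. A minimal follow-up sketch against the same profile, assuming the cluster is still running (these commands are illustrative and were not part of the captured run; pod and container names are taken from the kubelet log above):

    # Inspect the crash-looping backend pod named in the kubelet log
    kubectl --context functional-742000 describe pod hello-node-connect-7799dfb7c6-zgsq6
    # Fetch logs from the previous (crashed) container instance
    kubectl --context functional-742000 logs --previous hello-node-connect-7799dfb7c6-zgsq6 -c echoserver-arm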

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (240.97s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
functional_test_pvc_test.go:44: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "integration-test=storage-provisioner" failed to start within 4m0s: context deadline exceeded ****
functional_test_pvc_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-742000 -n functional-742000
functional_test_pvc_test.go:44: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2023-09-25 04:13:18.80261 -0700 PDT m=+2398.678755626
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: integration-test=storage-provisioner within 4m0s: context deadline exceeded
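The wait above polls kube-system for a pod carrying the label integration-test=storage-provisioner (selector taken verbatim from the failure message). A minimal manual check against the same profile, assuming the cluster is still up and that the addon pod keeps minikube's conventional name storage-provisioner (not confirmed by the captured output):

    # List pods matching the exact label the test waits on
    kubectl --context functional-742000 -n kube-system get pods -l integration-test=storage-provisioner -o wide
    # Describe the addon pod, assuming its conventional name
    kubectl --context functional-742000 -n kube-system describe pod storage-provisioner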
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-742000 -n functional-742000
helpers_test.go:244: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|----------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                           Args                           |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|----------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| image          | functional-742000 image save                             | functional-742000 | jenkins | v1.31.2 | 25 Sep 23 04:11 PDT | 25 Sep 23 04:11 PDT |
	|                | gcr.io/google-containers/addon-resizer:functional-742000 |                   |         |         |                     |                     |
	|                | /Users/jenkins/workspace/addon-resizer-save.tar          |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-742000 image rm                               | functional-742000 | jenkins | v1.31.2 | 25 Sep 23 04:11 PDT | 25 Sep 23 04:11 PDT |
	|                | gcr.io/google-containers/addon-resizer:functional-742000 |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-742000 image ls                               | functional-742000 | jenkins | v1.31.2 | 25 Sep 23 04:11 PDT | 25 Sep 23 04:11 PDT |
	| image          | functional-742000 image load                             | functional-742000 | jenkins | v1.31.2 | 25 Sep 23 04:11 PDT | 25 Sep 23 04:11 PDT |
	|                | /Users/jenkins/workspace/addon-resizer-save.tar          |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-742000 image ls                               | functional-742000 | jenkins | v1.31.2 | 25 Sep 23 04:11 PDT | 25 Sep 23 04:11 PDT |
	| image          | functional-742000 image save --daemon                    | functional-742000 | jenkins | v1.31.2 | 25 Sep 23 04:11 PDT | 25 Sep 23 04:11 PDT |
	|                | gcr.io/google-containers/addon-resizer:functional-742000 |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| ssh            | functional-742000 ssh sudo cat                           | functional-742000 | jenkins | v1.31.2 | 25 Sep 23 04:11 PDT | 25 Sep 23 04:11 PDT |
	|                | /etc/ssl/certs/1469.pem                                  |                   |         |         |                     |                     |
	| ssh            | functional-742000 ssh sudo cat                           | functional-742000 | jenkins | v1.31.2 | 25 Sep 23 04:11 PDT | 25 Sep 23 04:11 PDT |
	|                | /usr/share/ca-certificates/1469.pem                      |                   |         |         |                     |                     |
	| ssh            | functional-742000 ssh sudo cat                           | functional-742000 | jenkins | v1.31.2 | 25 Sep 23 04:11 PDT | 25 Sep 23 04:11 PDT |
	|                | /etc/ssl/certs/51391683.0                                |                   |         |         |                     |                     |
	| ssh            | functional-742000 ssh sudo cat                           | functional-742000 | jenkins | v1.31.2 | 25 Sep 23 04:11 PDT | 25 Sep 23 04:11 PDT |
	|                | /etc/ssl/certs/14692.pem                                 |                   |         |         |                     |                     |
	| ssh            | functional-742000 ssh sudo cat                           | functional-742000 | jenkins | v1.31.2 | 25 Sep 23 04:11 PDT | 25 Sep 23 04:11 PDT |
	|                | /usr/share/ca-certificates/14692.pem                     |                   |         |         |                     |                     |
	| ssh            | functional-742000 ssh sudo cat                           | functional-742000 | jenkins | v1.31.2 | 25 Sep 23 04:11 PDT | 25 Sep 23 04:11 PDT |
	|                | /etc/ssl/certs/3ec20f2e.0                                |                   |         |         |                     |                     |
	| docker-env     | functional-742000 docker-env                             | functional-742000 | jenkins | v1.31.2 | 25 Sep 23 04:11 PDT | 25 Sep 23 04:11 PDT |
	| docker-env     | functional-742000 docker-env                             | functional-742000 | jenkins | v1.31.2 | 25 Sep 23 04:11 PDT | 25 Sep 23 04:11 PDT |
	| ssh            | functional-742000 ssh sudo cat                           | functional-742000 | jenkins | v1.31.2 | 25 Sep 23 04:11 PDT | 25 Sep 23 04:11 PDT |
	|                | /etc/test/nested/copy/1469/hosts                         |                   |         |         |                     |                     |
	| image          | functional-742000                                        | functional-742000 | jenkins | v1.31.2 | 25 Sep 23 04:11 PDT | 25 Sep 23 04:11 PDT |
	|                | image ls --format short                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-742000                                        | functional-742000 | jenkins | v1.31.2 | 25 Sep 23 04:11 PDT | 25 Sep 23 04:11 PDT |
	|                | image ls --format yaml                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| ssh            | functional-742000 ssh pgrep                              | functional-742000 | jenkins | v1.31.2 | 25 Sep 23 04:11 PDT |                     |
	|                | buildkitd                                                |                   |         |         |                     |                     |
	| image          | functional-742000 image build -t                         | functional-742000 | jenkins | v1.31.2 | 25 Sep 23 04:11 PDT | 25 Sep 23 04:11 PDT |
	|                | localhost/my-image:functional-742000                     |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                         |                   |         |         |                     |                     |
	| image          | functional-742000 image ls                               | functional-742000 | jenkins | v1.31.2 | 25 Sep 23 04:11 PDT | 25 Sep 23 04:11 PDT |
	| image          | functional-742000                                        | functional-742000 | jenkins | v1.31.2 | 25 Sep 23 04:11 PDT | 25 Sep 23 04:11 PDT |
	|                | image ls --format json                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-742000                                        | functional-742000 | jenkins | v1.31.2 | 25 Sep 23 04:11 PDT | 25 Sep 23 04:11 PDT |
	|                | image ls --format table                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| update-context | functional-742000                                        | functional-742000 | jenkins | v1.31.2 | 25 Sep 23 04:11 PDT | 25 Sep 23 04:11 PDT |
	|                | update-context                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                   |                   |         |         |                     |                     |
	| update-context | functional-742000                                        | functional-742000 | jenkins | v1.31.2 | 25 Sep 23 04:11 PDT | 25 Sep 23 04:11 PDT |
	|                | update-context                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                   |                   |         |         |                     |                     |
	| update-context | functional-742000                                        | functional-742000 | jenkins | v1.31.2 | 25 Sep 23 04:11 PDT | 25 Sep 23 04:11 PDT |
	|                | update-context                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                   |                   |         |         |                     |                     |
	|----------------|----------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/25 04:10:43
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.21.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0925 04:10:43.952196    3675 out.go:296] Setting OutFile to fd 1 ...
	I0925 04:10:43.952329    3675 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:10:43.952332    3675 out.go:309] Setting ErrFile to fd 2...
	I0925 04:10:43.952334    3675 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:10:43.952471    3675 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17297-1010/.minikube/bin
	I0925 04:10:43.953502    3675 out.go:303] Setting JSON to false
	I0925 04:10:43.969793    3675 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2418,"bootTime":1695637825,"procs":415,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.6","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 04:10:43.969891    3675 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0925 04:10:43.974430    3675 out.go:177] * [functional-742000] minikube v1.31.2 on Darwin 13.6 (arm64)
	I0925 04:10:43.981391    3675 out.go:177]   - MINIKUBE_LOCATION=17297
	I0925 04:10:43.985409    3675 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	I0925 04:10:43.981444    3675 notify.go:220] Checking for updates...
	I0925 04:10:43.991394    3675 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 04:10:43.994398    3675 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 04:10:43.997376    3675 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	I0925 04:10:44.000339    3675 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 04:10:44.003681    3675 config.go:182] Loaded profile config "functional-742000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 04:10:44.003918    3675 driver.go:373] Setting default libvirt URI to qemu:///system
	I0925 04:10:44.008347    3675 out.go:177] * Using the qemu2 driver based on existing profile
	I0925 04:10:44.015420    3675 start.go:298] selected driver: qemu2
	I0925 04:10:44.015427    3675 start.go:902] validating driver "qemu2" against &{Name:functional-742000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.2 ClusterName:functional-742000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false E
xtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 04:10:44.015487    3675 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 04:10:44.017414    3675 cni.go:84] Creating CNI manager for ""
	I0925 04:10:44.017429    3675 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 04:10:44.017433    3675 start_flags.go:321] config:
	{Name:functional-742000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:functional-742000 Namespace:default A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 04:10:44.028380    3675 out.go:177] * dry-run validation complete!
	
	* 
	* ==> Docker <==
	* -- Journal begins at Mon 2023-09-25 11:07:21 UTC, ends at Mon 2023-09-25 11:13:19 UTC. --
	Sep 25 11:11:06 functional-742000 dockerd[6331]: time="2023-09-25T11:11:06.268979884Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 25 11:11:39 functional-742000 dockerd[6331]: time="2023-09-25T11:11:39.225889320Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 25 11:11:39 functional-742000 dockerd[6331]: time="2023-09-25T11:11:39.225917149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 11:11:39 functional-742000 dockerd[6331]: time="2023-09-25T11:11:39.225933564Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 25 11:11:39 functional-742000 dockerd[6331]: time="2023-09-25T11:11:39.225938354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 11:11:39 functional-742000 dockerd[6315]: time="2023-09-25T11:11:39.264551128Z" level=info msg="ignoring event" container=d241113e303e11cd1f8f674935f62f3cd9fbd0ebff2eb1fef1c5318d53b207f8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 25 11:11:39 functional-742000 dockerd[6331]: time="2023-09-25T11:11:39.264770471Z" level=info msg="shim disconnected" id=d241113e303e11cd1f8f674935f62f3cd9fbd0ebff2eb1fef1c5318d53b207f8 namespace=moby
	Sep 25 11:11:39 functional-742000 dockerd[6331]: time="2023-09-25T11:11:39.264800009Z" level=warning msg="cleaning up after shim disconnected" id=d241113e303e11cd1f8f674935f62f3cd9fbd0ebff2eb1fef1c5318d53b207f8 namespace=moby
	Sep 25 11:11:39 functional-742000 dockerd[6331]: time="2023-09-25T11:11:39.264804175Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 25 11:12:38 functional-742000 dockerd[6331]: time="2023-09-25T11:12:38.226987424Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 25 11:12:38 functional-742000 dockerd[6331]: time="2023-09-25T11:12:38.227016422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 11:12:38 functional-742000 dockerd[6331]: time="2023-09-25T11:12:38.227025130Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 25 11:12:38 functional-742000 dockerd[6331]: time="2023-09-25T11:12:38.227031505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 11:12:38 functional-742000 dockerd[6331]: time="2023-09-25T11:12:38.272271716Z" level=info msg="shim disconnected" id=c75409f90d85673ae36b4f8b807162e8d99c008c73d47d1641895a88d2431d53 namespace=moby
	Sep 25 11:12:38 functional-742000 dockerd[6315]: time="2023-09-25T11:12:38.272335172Z" level=info msg="ignoring event" container=c75409f90d85673ae36b4f8b807162e8d99c008c73d47d1641895a88d2431d53 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 25 11:12:38 functional-742000 dockerd[6331]: time="2023-09-25T11:12:38.272654951Z" level=warning msg="cleaning up after shim disconnected" id=c75409f90d85673ae36b4f8b807162e8d99c008c73d47d1641895a88d2431d53 namespace=moby
	Sep 25 11:12:38 functional-742000 dockerd[6331]: time="2023-09-25T11:12:38.272664909Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 25 11:13:13 functional-742000 dockerd[6331]: time="2023-09-25T11:13:13.211619999Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 25 11:13:13 functional-742000 dockerd[6331]: time="2023-09-25T11:13:13.211703415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 11:13:13 functional-742000 dockerd[6331]: time="2023-09-25T11:13:13.211715748Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 25 11:13:13 functional-742000 dockerd[6331]: time="2023-09-25T11:13:13.211722415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 11:13:13 functional-742000 dockerd[6315]: time="2023-09-25T11:13:13.258533249Z" level=info msg="ignoring event" container=17d4a5e3fc9f60889743ee78da100713c0a005820a6932f1c96502eaf43585e3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 25 11:13:13 functional-742000 dockerd[6331]: time="2023-09-25T11:13:13.258793913Z" level=info msg="shim disconnected" id=17d4a5e3fc9f60889743ee78da100713c0a005820a6932f1c96502eaf43585e3 namespace=moby
	Sep 25 11:13:13 functional-742000 dockerd[6331]: time="2023-09-25T11:13:13.258874371Z" level=warning msg="cleaning up after shim disconnected" id=17d4a5e3fc9f60889743ee78da100713c0a005820a6932f1c96502eaf43585e3 namespace=moby
	Sep 25 11:13:13 functional-742000 dockerd[6331]: time="2023-09-25T11:13:13.258878912Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                  CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	17d4a5e3fc9f6       72565bf5bbedf                                                                                          6 seconds ago       Exited              echoserver-arm              5                   4a2b11ac4d246       hello-node-759d89bdcc-shxqc
	c75409f90d856       72565bf5bbedf                                                                                          41 seconds ago      Exited              echoserver-arm              5                   8ab282d179c6f       hello-node-connect-7799dfb7c6-zgsq6
	4b9d1d71aec4d       kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c   2 minutes ago       Running             dashboard-metrics-scraper   0                   c4d4271d18bea       dashboard-metrics-scraper-7fd5cb4ddc-gxjzk
	89997adafdf8f       kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93         2 minutes ago       Running             kubernetes-dashboard        0                   2bffe18622a68       kubernetes-dashboard-8694d4445c-rx248
	b0106e355ba7b       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e    3 minutes ago       Exited              mount-munger                0                   6c8473fdc2ce9       busybox-mount
	26fcc726e3097       nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70                          3 minutes ago       Running             nginx                       0                   bd5d4a16fe64f       nginx-svc
	1cc1b7565e18b       97e04611ad434                                                                                          4 minutes ago       Running             coredns                     2                   d6ce0e07e5c0a       coredns-5dd5756b68-g4fc9
	245554f90876a       7da62c127fc0f                                                                                          4 minutes ago       Running             kube-proxy                  2                   e825cdef0742e       kube-proxy-nrkqn
	8ecb93fefaf66       64fc40cee3716                                                                                          4 minutes ago       Running             kube-scheduler              2                   869225a46cf68       kube-scheduler-functional-742000
	11c3b1bd0f8f7       9cdd6470f48c8                                                                                          4 minutes ago       Running             etcd                        2                   e4d211aa506a2       etcd-functional-742000
	0c045a8d318df       89d57b83c1786                                                                                          4 minutes ago       Running             kube-controller-manager     2                   660455e27387c       kube-controller-manager-functional-742000
	1256b9394b066       30bb499447fe1                                                                                          4 minutes ago       Running             kube-apiserver              0                   8786c37d64870       kube-apiserver-functional-742000
	9dfb99f2a5783       64fc40cee3716                                                                                          5 minutes ago       Exited              kube-scheduler              1                   8b742965d8b9a       kube-scheduler-functional-742000
	dbf1933a62110       97e04611ad434                                                                                          5 minutes ago       Exited              coredns                     1                   81582e65ba6a8       coredns-5dd5756b68-g4fc9
	0b80f8304b6c1       7da62c127fc0f                                                                                          5 minutes ago       Exited              kube-proxy                  1                   6ba65cf0457c5       kube-proxy-nrkqn
	9739b7a7e929d       89d57b83c1786                                                                                          5 minutes ago       Exited              kube-controller-manager     1                   62fa3d0d412bf       kube-controller-manager-functional-742000
	6dc0c59d3999c       9cdd6470f48c8                                                                                          5 minutes ago       Exited              etcd                        1                   cacd00726bcc3       etcd-functional-742000
	
	* 
	* ==> coredns [1cc1b7565e18] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:59504 - 22885 "HINFO IN 5633031943027150203.4434871831535075704. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.011680141s
	[INFO] 10.244.0.1:58362 - 16730 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000091963s
	[INFO] 10.244.0.1:16973 - 13186 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000087213s
	[INFO] 10.244.0.1:1858 - 34203 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000027293s
	[INFO] 10.244.0.1:53295 - 32388 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.001082352s
	[INFO] 10.244.0.1:64237 - 39599 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000096213s
	[INFO] 10.244.0.1:16658 - 12893 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000151508s
	
	* 
	* ==> coredns [dbf1933a6211] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:60101 - 7169 "HINFO IN 8027455119369103070.7748337189168415926. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.00451965s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               functional-742000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-742000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1bf6c3d5317028f348e55ea19d261973a6487d3c
	                    minikube.k8s.io/name=functional-742000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_25T04_07_38_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 25 Sep 2023 11:07:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-742000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 25 Sep 2023 11:13:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 25 Sep 2023 11:11:25 +0000   Mon, 25 Sep 2023 11:07:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 25 Sep 2023 11:11:25 +0000   Mon, 25 Sep 2023 11:07:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 25 Sep 2023 11:11:25 +0000   Mon, 25 Sep 2023 11:07:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 25 Sep 2023 11:11:25 +0000   Mon, 25 Sep 2023 11:07:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-742000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 4686ddaaff034e7ca3bcf9e0e376da63
	  System UUID:                4686ddaaff034e7ca3bcf9e0e376da63
	  Boot ID:                    fc0d0486-c830-4785-907f-bd10feca9cb4
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-759d89bdcc-shxqc                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m15s
	  default                     hello-node-connect-7799dfb7c6-zgsq6           0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m50s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m
	  kube-system                 coredns-5dd5756b68-g4fc9                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m28s
	  kube-system                 etcd-functional-742000                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m42s
	  kube-system                 kube-apiserver-functional-742000              250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m26s
	  kube-system                 kube-controller-manager-functional-742000     200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m42s
	  kube-system                 kube-proxy-nrkqn                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m28s
	  kube-system                 kube-scheduler-functional-742000              100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m41s
	  kubernetes-dashboard        dashboard-metrics-scraper-7fd5cb4ddc-gxjzk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m35s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-rx248         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m26s                  kube-proxy       
	  Normal  Starting                 4m25s                  kube-proxy       
	  Normal  Starting                 5m6s                   kube-proxy       
	  Normal  Starting                 5m41s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m41s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m41s                  kubelet          Node functional-742000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m41s                  kubelet          Node functional-742000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m41s                  kubelet          Node functional-742000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m38s                  kubelet          Node functional-742000 status is now: NodeReady
	  Normal  RegisteredNode           5m29s                  node-controller  Node functional-742000 event: Registered Node functional-742000 in Controller
	  Normal  RegisteredNode           4m54s                  node-controller  Node functional-742000 event: Registered Node functional-742000 in Controller
	  Normal  Starting                 4m29s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m29s (x8 over 4m29s)  kubelet          Node functional-742000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m29s (x8 over 4m29s)  kubelet          Node functional-742000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m29s (x7 over 4m29s)  kubelet          Node functional-742000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m29s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m14s                  node-controller  Node functional-742000 event: Registered Node functional-742000 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.080692] systemd-fstab-generator[3691]: Ignoring "noauto" for root device
	[  +0.085806] systemd-fstab-generator[3704]: Ignoring "noauto" for root device
	[Sep25 11:08] kauditd_printk_skb: 23 callbacks suppressed
	[  +6.191060] systemd-fstab-generator[4217]: Ignoring "noauto" for root device
	[  +0.064245] systemd-fstab-generator[4228]: Ignoring "noauto" for root device
	[  +0.071200] systemd-fstab-generator[4239]: Ignoring "noauto" for root device
	[  +0.062954] systemd-fstab-generator[4250]: Ignoring "noauto" for root device
	[  +0.097276] systemd-fstab-generator[4327]: Ignoring "noauto" for root device
	[  +4.708107] kauditd_printk_skb: 34 callbacks suppressed
	[ +23.876819] systemd-fstab-generator[5896]: Ignoring "noauto" for root device
	[  +0.130993] systemd-fstab-generator[5930]: Ignoring "noauto" for root device
	[  +0.076762] systemd-fstab-generator[5941]: Ignoring "noauto" for root device
	[  +0.090388] systemd-fstab-generator[5954]: Ignoring "noauto" for root device
	[ +11.337001] systemd-fstab-generator[6477]: Ignoring "noauto" for root device
	[  +0.060592] systemd-fstab-generator[6488]: Ignoring "noauto" for root device
	[  +0.066388] systemd-fstab-generator[6499]: Ignoring "noauto" for root device
	[  +0.060389] systemd-fstab-generator[6510]: Ignoring "noauto" for root device
	[  +0.085868] systemd-fstab-generator[6583]: Ignoring "noauto" for root device
	[  +1.091664] systemd-fstab-generator[6830]: Ignoring "noauto" for root device
	[  +3.563266] kauditd_printk_skb: 29 callbacks suppressed
	[Sep25 11:09] kauditd_printk_skb: 11 callbacks suppressed
	[  +9.990600] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.901389] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[Sep25 11:10] kauditd_printk_skb: 1 callbacks suppressed
	[ +35.728803] kauditd_printk_skb: 6 callbacks suppressed
	
	* 
	* ==> etcd [11c3b1bd0f8f] <==
	* {"level":"info","ts":"2023-09-25T11:08:51.085839Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-09-25T11:08:51.084681Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-25T11:08:51.087259Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-25T11:08:51.084807Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 switched to configuration voters=(527499358918876438)"}
	{"level":"info","ts":"2023-09-25T11:08:51.084832Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-09-25T11:08:51.084599Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2023-09-25T11:08:51.087328Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-25T11:08:51.087411Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2023-09-25T11:08:51.087444Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-09-25T11:08:51.087592Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-25T11:08:51.087644Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-25T11:08:52.155198Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2023-09-25T11:08:52.155357Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-09-25T11:08:52.155487Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2023-09-25T11:08:52.155536Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2023-09-25T11:08:52.155616Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2023-09-25T11:08:52.15565Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2023-09-25T11:08:52.155665Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2023-09-25T11:08:52.157595Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-742000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-25T11:08:52.157641Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-25T11:08:52.157928Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-25T11:08:52.157997Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-25T11:08:52.158092Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-25T11:08:52.15905Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-25T11:08:52.159326Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	
	* 
	* ==> etcd [6dc0c59d3999] <==
	* {"level":"info","ts":"2023-09-25T11:08:11.111602Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-09-25T11:08:12.404198Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 2"}
	{"level":"info","ts":"2023-09-25T11:08:12.404362Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-09-25T11:08:12.404405Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2023-09-25T11:08:12.404433Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2023-09-25T11:08:12.404449Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2023-09-25T11:08:12.404481Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2023-09-25T11:08:12.40453Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2023-09-25T11:08:12.407532Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-742000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-25T11:08:12.407615Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-25T11:08:12.407715Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-25T11:08:12.40775Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-25T11:08:12.407783Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-25T11:08:12.409965Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-25T11:08:12.410097Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2023-09-25T11:08:37.433818Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-09-25T11:08:37.433848Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"functional-742000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2023-09-25T11:08:37.433893Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-25T11:08:37.433907Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-25T11:08:37.437511Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-25T11:08:37.437561Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2023-09-25T11:08:37.449147Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2023-09-25T11:08:37.450434Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-09-25T11:08:37.450466Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-09-25T11:08:37.450471Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"functional-742000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	* 
	* ==> kernel <==
	*  11:13:19 up 5 min,  0 users,  load average: 0.21, 0.37, 0.17
	Linux functional-742000 5.10.57 #1 SMP PREEMPT Mon Sep 18 20:10:16 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [1256b9394b06] <==
	* I0925 11:08:52.823263       1 aggregator.go:166] initial CRD sync complete...
	I0925 11:08:52.823282       1 autoregister_controller.go:141] Starting autoregister controller
	I0925 11:08:52.823307       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0925 11:08:52.823325       1 cache.go:39] Caches are synced for autoregister controller
	I0925 11:08:52.823609       1 shared_informer.go:318] Caches are synced for configmaps
	I0925 11:08:52.823646       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0925 11:08:52.823661       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0925 11:08:52.855640       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0925 11:08:53.722790       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0925 11:08:53.827995       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.105.4]
	I0925 11:08:53.828437       1 controller.go:624] quota admission added evaluator for: endpoints
	I0925 11:08:53.829781       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0925 11:08:54.349367       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0925 11:08:54.352443       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0925 11:08:54.364586       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0925 11:08:54.374133       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0925 11:08:54.376689       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0925 11:09:14.222615       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.108.72.121"}
	I0925 11:09:19.042912       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.102.188.37"}
	I0925 11:09:29.423184       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0925 11:09:29.466922       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.111.12.229"}
	I0925 11:10:04.313250       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.103.179.137"}
	I0925 11:10:44.521621       1 controller.go:624] quota admission added evaluator for: namespaces
	I0925 11:10:44.610633       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.113.126"}
	I0925 11:10:44.621080       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.44.158"}
	
	* 
	* ==> kube-controller-manager [0c045a8d318d] <==
	* I0925 11:10:44.595589       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8694d4445c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0925 11:10:44.595599       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-7fd5cb4ddc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0925 11:10:44.622051       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-rx248"
	I0925 11:10:44.634372       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="16.786273ms"
	I0925 11:10:44.641821       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="7.392187ms"
	I0925 11:10:44.642714       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-7fd5cb4ddc-gxjzk"
	I0925 11:10:44.647564       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="10.265716ms"
	I0925 11:10:44.655870       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="8.189397ms"
	I0925 11:10:44.656001       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="16.709µs"
	I0925 11:10:44.656067       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="14.225886ms"
	I0925 11:10:44.656103       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="10.792µs"
	I0925 11:10:44.660405       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="17.251µs"
	I0925 11:10:48.975229       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="24.46µs"
	I0925 11:10:49.989008       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="4.531159ms"
	I0925 11:10:49.989035       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="10.918µs"
	I0925 11:10:51.998298       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="2.68131ms"
	I0925 11:10:51.998324       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="13.126µs"
	I0925 11:11:00.197127       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="24.751µs"
	I0925 11:11:07.076305       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="38.794µs"
	I0925 11:11:21.196591       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="23.001µs"
	I0925 11:11:40.273074       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="32.703µs"
	I0925 11:11:53.196574       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="26.455µs"
	I0925 11:12:38.526558       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="36.124µs"
	I0925 11:12:53.196087       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="64.665µs"
	I0925 11:13:13.711510       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="54.792µs"
	
	* 
	* ==> kube-controller-manager [9739b7a7e929] <==
	* I0925 11:08:25.421818       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0925 11:08:25.421827       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I0925 11:08:25.421836       1 taint_manager.go:211] "Sending events to api server"
	I0925 11:08:25.421933       1 event.go:307] "Event occurred" object="functional-742000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node functional-742000 event: Registered Node functional-742000 in Controller"
	I0925 11:08:25.423570       1 shared_informer.go:318] Caches are synced for PVC protection
	I0925 11:08:25.453135       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0925 11:08:25.453264       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="63.78µs"
	I0925 11:08:25.462427       1 shared_informer.go:318] Caches are synced for node
	I0925 11:08:25.462474       1 range_allocator.go:174] "Sending events to api server"
	I0925 11:08:25.462484       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0925 11:08:25.462486       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0925 11:08:25.462488       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0925 11:08:25.463473       1 shared_informer.go:318] Caches are synced for expand
	I0925 11:08:25.463527       1 shared_informer.go:318] Caches are synced for endpoint
	I0925 11:08:25.465498       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0925 11:08:25.474366       1 shared_informer.go:318] Caches are synced for deployment
	I0925 11:08:25.522405       1 shared_informer.go:318] Caches are synced for persistent volume
	I0925 11:08:25.605663       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0925 11:08:25.620267       1 shared_informer.go:318] Caches are synced for cronjob
	I0925 11:08:25.631730       1 shared_informer.go:318] Caches are synced for resource quota
	I0925 11:08:25.657791       1 shared_informer.go:318] Caches are synced for job
	I0925 11:08:25.675425       1 shared_informer.go:318] Caches are synced for resource quota
	I0925 11:08:25.998008       1 shared_informer.go:318] Caches are synced for garbage collector
	I0925 11:08:26.072746       1 shared_informer.go:318] Caches are synced for garbage collector
	I0925 11:08:26.072797       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	* 
	* ==> kube-proxy [0b80f8304b6c] <==
	* I0925 11:08:11.988007       1 server_others.go:69] "Using iptables proxy"
	I0925 11:08:13.032054       1 node.go:141] Successfully retrieved node IP: 192.168.105.4
	I0925 11:08:13.051082       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0925 11:08:13.051096       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0925 11:08:13.051797       1 server_others.go:152] "Using iptables Proxier"
	I0925 11:08:13.051818       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0925 11:08:13.051882       1 server.go:846] "Version info" version="v1.28.2"
	I0925 11:08:13.051890       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0925 11:08:13.052236       1 config.go:315] "Starting node config controller"
	I0925 11:08:13.052243       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0925 11:08:13.052400       1 config.go:188] "Starting service config controller"
	I0925 11:08:13.052406       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0925 11:08:13.052412       1 config.go:97] "Starting endpoint slice config controller"
	I0925 11:08:13.052414       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0925 11:08:13.152840       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0925 11:08:13.152936       1 shared_informer.go:318] Caches are synced for node config
	I0925 11:08:13.152970       1 shared_informer.go:318] Caches are synced for service config
	
	* 
	* ==> kube-proxy [245554f90876] <==
	* I0925 11:08:53.615822       1 server_others.go:69] "Using iptables proxy"
	I0925 11:08:53.623148       1 node.go:141] Successfully retrieved node IP: 192.168.105.4
	I0925 11:08:53.745010       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0925 11:08:53.745024       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0925 11:08:53.745993       1 server_others.go:152] "Using iptables Proxier"
	I0925 11:08:53.746009       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0925 11:08:53.746080       1 server.go:846] "Version info" version="v1.28.2"
	I0925 11:08:53.746084       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0925 11:08:53.746608       1 config.go:188] "Starting service config controller"
	I0925 11:08:53.746612       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0925 11:08:53.746619       1 config.go:97] "Starting endpoint slice config controller"
	I0925 11:08:53.746620       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0925 11:08:53.746747       1 config.go:315] "Starting node config controller"
	I0925 11:08:53.746749       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0925 11:08:53.847496       1 shared_informer.go:318] Caches are synced for node config
	I0925 11:08:53.847496       1 shared_informer.go:318] Caches are synced for service config
	I0925 11:08:53.847519       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [8ecb93fefaf6] <==
	* I0925 11:08:51.785732       1 serving.go:348] Generated self-signed cert in-memory
	W0925 11:08:52.763331       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0925 11:08:52.763432       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0925 11:08:52.763457       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0925 11:08:52.763473       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0925 11:08:52.787982       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.2"
	I0925 11:08:52.787995       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0925 11:08:52.789339       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0925 11:08:52.789711       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0925 11:08:52.790011       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0925 11:08:52.790018       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0925 11:08:52.890574       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [9dfb99f2a578] <==
	* I0925 11:08:11.669062       1 serving.go:348] Generated self-signed cert in-memory
	W0925 11:08:13.023367       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0925 11:08:13.023385       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0925 11:08:13.023390       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0925 11:08:13.023393       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0925 11:08:13.032951       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.2"
	I0925 11:08:13.033018       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0925 11:08:13.033892       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0925 11:08:13.033977       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0925 11:08:13.033988       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0925 11:08:13.034007       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0925 11:08:13.135191       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0925 11:08:37.426004       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-09-25 11:07:21 UTC, ends at Mon 2023-09-25 11:13:19 UTC. --
	Sep 25 11:12:17 functional-742000 kubelet[6836]: E0925 11:12:17.192170    6836 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=echoserver-arm pod=hello-node-759d89bdcc-shxqc_default(58d6086e-707d-4d21-bfda-49e3adf87469)\"" pod="default/hello-node-759d89bdcc-shxqc" podUID="58d6086e-707d-4d21-bfda-49e3adf87469"
	Sep 25 11:12:25 functional-742000 kubelet[6836]: I0925 11:12:25.191357    6836 scope.go:117] "RemoveContainer" containerID="ce284dc35e39203a5bbfb08ec594f1aa72a9561f14117ad6400e57f38d697ec9"
	Sep 25 11:12:25 functional-742000 kubelet[6836]: E0925 11:12:25.191493    6836 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=echoserver-arm pod=hello-node-connect-7799dfb7c6-zgsq6_default(6ca1a74b-5f4a-43e9-8824-e36a6b269514)\"" pod="default/hello-node-connect-7799dfb7c6-zgsq6" podUID="6ca1a74b-5f4a-43e9-8824-e36a6b269514"
	Sep 25 11:12:31 functional-742000 kubelet[6836]: I0925 11:12:31.191832    6836 scope.go:117] "RemoveContainer" containerID="d241113e303e11cd1f8f674935f62f3cd9fbd0ebff2eb1fef1c5318d53b207f8"
	Sep 25 11:12:31 functional-742000 kubelet[6836]: E0925 11:12:31.191926    6836 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=echoserver-arm pod=hello-node-759d89bdcc-shxqc_default(58d6086e-707d-4d21-bfda-49e3adf87469)\"" pod="default/hello-node-759d89bdcc-shxqc" podUID="58d6086e-707d-4d21-bfda-49e3adf87469"
	Sep 25 11:12:38 functional-742000 kubelet[6836]: I0925 11:12:38.191959    6836 scope.go:117] "RemoveContainer" containerID="ce284dc35e39203a5bbfb08ec594f1aa72a9561f14117ad6400e57f38d697ec9"
	Sep 25 11:12:38 functional-742000 kubelet[6836]: I0925 11:12:38.521105    6836 scope.go:117] "RemoveContainer" containerID="ce284dc35e39203a5bbfb08ec594f1aa72a9561f14117ad6400e57f38d697ec9"
	Sep 25 11:12:38 functional-742000 kubelet[6836]: I0925 11:12:38.521285    6836 scope.go:117] "RemoveContainer" containerID="c75409f90d85673ae36b4f8b807162e8d99c008c73d47d1641895a88d2431d53"
	Sep 25 11:12:38 functional-742000 kubelet[6836]: E0925 11:12:38.521394    6836 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=echoserver-arm pod=hello-node-connect-7799dfb7c6-zgsq6_default(6ca1a74b-5f4a-43e9-8824-e36a6b269514)\"" pod="default/hello-node-connect-7799dfb7c6-zgsq6" podUID="6ca1a74b-5f4a-43e9-8824-e36a6b269514"
	Sep 25 11:12:46 functional-742000 kubelet[6836]: I0925 11:12:46.191467    6836 scope.go:117] "RemoveContainer" containerID="d241113e303e11cd1f8f674935f62f3cd9fbd0ebff2eb1fef1c5318d53b207f8"
	Sep 25 11:12:46 functional-742000 kubelet[6836]: E0925 11:12:46.191858    6836 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=echoserver-arm pod=hello-node-759d89bdcc-shxqc_default(58d6086e-707d-4d21-bfda-49e3adf87469)\"" pod="default/hello-node-759d89bdcc-shxqc" podUID="58d6086e-707d-4d21-bfda-49e3adf87469"
	Sep 25 11:12:50 functional-742000 kubelet[6836]: E0925 11:12:50.201263    6836 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 25 11:12:50 functional-742000 kubelet[6836]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 25 11:12:50 functional-742000 kubelet[6836]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 25 11:12:50 functional-742000 kubelet[6836]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 25 11:12:53 functional-742000 kubelet[6836]: I0925 11:12:53.191524    6836 scope.go:117] "RemoveContainer" containerID="c75409f90d85673ae36b4f8b807162e8d99c008c73d47d1641895a88d2431d53"
	Sep 25 11:12:53 functional-742000 kubelet[6836]: E0925 11:12:53.191813    6836 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=echoserver-arm pod=hello-node-connect-7799dfb7c6-zgsq6_default(6ca1a74b-5f4a-43e9-8824-e36a6b269514)\"" pod="default/hello-node-connect-7799dfb7c6-zgsq6" podUID="6ca1a74b-5f4a-43e9-8824-e36a6b269514"
	Sep 25 11:12:59 functional-742000 kubelet[6836]: I0925 11:12:59.191831    6836 scope.go:117] "RemoveContainer" containerID="d241113e303e11cd1f8f674935f62f3cd9fbd0ebff2eb1fef1c5318d53b207f8"
	Sep 25 11:12:59 functional-742000 kubelet[6836]: E0925 11:12:59.191923    6836 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=echoserver-arm pod=hello-node-759d89bdcc-shxqc_default(58d6086e-707d-4d21-bfda-49e3adf87469)\"" pod="default/hello-node-759d89bdcc-shxqc" podUID="58d6086e-707d-4d21-bfda-49e3adf87469"
	Sep 25 11:13:08 functional-742000 kubelet[6836]: I0925 11:13:08.192026    6836 scope.go:117] "RemoveContainer" containerID="c75409f90d85673ae36b4f8b807162e8d99c008c73d47d1641895a88d2431d53"
	Sep 25 11:13:08 functional-742000 kubelet[6836]: E0925 11:13:08.192104    6836 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=echoserver-arm pod=hello-node-connect-7799dfb7c6-zgsq6_default(6ca1a74b-5f4a-43e9-8824-e36a6b269514)\"" pod="default/hello-node-connect-7799dfb7c6-zgsq6" podUID="6ca1a74b-5f4a-43e9-8824-e36a6b269514"
	Sep 25 11:13:13 functional-742000 kubelet[6836]: I0925 11:13:13.191370    6836 scope.go:117] "RemoveContainer" containerID="d241113e303e11cd1f8f674935f62f3cd9fbd0ebff2eb1fef1c5318d53b207f8"
	Sep 25 11:13:13 functional-742000 kubelet[6836]: I0925 11:13:13.705409    6836 scope.go:117] "RemoveContainer" containerID="d241113e303e11cd1f8f674935f62f3cd9fbd0ebff2eb1fef1c5318d53b207f8"
	Sep 25 11:13:13 functional-742000 kubelet[6836]: I0925 11:13:13.705561    6836 scope.go:117] "RemoveContainer" containerID="17d4a5e3fc9f60889743ee78da100713c0a005820a6932f1c96502eaf43585e3"
	Sep 25 11:13:13 functional-742000 kubelet[6836]: E0925 11:13:13.705657    6836 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=echoserver-arm pod=hello-node-759d89bdcc-shxqc_default(58d6086e-707d-4d21-bfda-49e3adf87469)\"" pod="default/hello-node-759d89bdcc-shxqc" podUID="58d6086e-707d-4d21-bfda-49e3adf87469"
	
	* 
	* ==> kubernetes-dashboard [89997adafdf8] <==
	* 2023/09/25 11:10:49 Using namespace: kubernetes-dashboard
	2023/09/25 11:10:49 Using in-cluster config to connect to apiserver
	2023/09/25 11:10:49 Using secret token for csrf signing
	2023/09/25 11:10:49 Initializing csrf token from kubernetes-dashboard-csrf secret
	2023/09/25 11:10:49 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2023/09/25 11:10:49 Successful initial request to the apiserver, version: v1.28.2
	2023/09/25 11:10:49 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2023/09/25 11:10:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2023/09/25 11:10:49 Generating JWE encryption key
	2023/09/25 11:10:49 Initializing JWE encryption key from synchronized object
	2023/09/25 11:10:49 Creating in-cluster Sidecar client
	2023/09/25 11:10:49 Serving insecurely on HTTP port: 9090
	2023/09/25 11:10:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/09/25 11:11:19 Successful request to sidecar
	2023/09/25 11:10:49 Starting overwatch
	

-- /stdout --
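
The kubelet section in the logs above fails its iptables canary on every pass because the guest kernel exposes no ip6tables "nat" table. A minimal way to confirm that from inside the VM — a sketch, assuming "minikube ssh" access and the stock module name ip6table_nat, neither of which this report verifies:

	$ minikube ssh -p functional-742000
	$ sudo ip6tables -t nat -L      # reproduces: can't initialize ip6tables table `nat'
	$ lsmod | grep ip6table_nat     # empty if the module is not loaded
	$ sudo modprobe ip6table_nat    # fails outright if the Buildroot kernel omits it

Since kube-proxy runs single-stack IPv4 here ("No iptables support for family" ipFamily="IPv6"), the canary failure is cosmetic and not the cause of this test's failure.
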
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-742000 -n functional-742000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-742000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-742000 describe pod busybox-mount
helpers_test.go:282: (dbg) kubectl --context functional-742000 describe pod busybox-mount:

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-742000/192.168.105.4
	Start Time:       Mon, 25 Sep 2023 04:10:13 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  mount-munger:
	    Container ID:  docker://b0106e355ba7bcedee3ba1f00b0c619a2786ea1a6d4f79fc499d07825393a826
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 25 Sep 2023 04:10:15 -0700
	      Finished:     Mon, 25 Sep 2023 04:10:15 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sjqn8 (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-sjqn8:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3m5s  default-scheduler  Successfully assigned default/busybox-mount to functional-742000
	  Normal  Pulling    3m5s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     3m4s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.075s (1.075s including waiting)
	  Normal  Created    3m4s  kubelet            Created container mount-munger
	  Normal  Started    3m4s  kubelet            Started container mount-munger

-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (240.97s)
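
Note that busybox-mount is flagged by the status.phase!=Running field selector above even though it exited cleanly: completed pods report phase Succeeded, not Running. A variant of the harness query that would skip completed pods — chained field selectors are standard kubectl syntax, same context name as above:

	$ kubectl --context functional-742000 get po -A --field-selector=status.phase!=Running,status.phase!=Succeeded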

TestImageBuild/serial/BuildWithBuildArg (1.09s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-543000
image_test.go:105: failed to pass build-args with args: "out/minikube-darwin-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-543000" : 
-- stdout --
	Sending build context to Docker daemon  2.048kB
	Step 1/5 : FROM gcr.io/google-containers/alpine-with-bash:1.0
	 ---> 822c13824dc2
	Step 2/5 : ARG ENV_A
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in 3c8846c93458
	Removing intermediate container 3c8846c93458
	 ---> 389b030f6977
	Step 3/5 : ARG ENV_B
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in 08d2abf7d558
	Removing intermediate container 08d2abf7d558
	 ---> 04af79c0a289
	Step 4/5 : RUN echo "test-build-arg" $ENV_A $ENV_B
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in c84876e9f9d1
	exec /bin/sh: exec format error
	

-- /stdout --
** stderr ** 
	DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
	            Install the buildx component to build images with BuildKit:
	            https://docs.docker.com/go/buildx/
	
	The command '/bin/sh -c echo "test-build-arg" $ENV_A $ENV_B' returned a non-zero code: 1

** /stderr **
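
The step output pins down the cause: gcr.io/google-containers/alpine-with-bash:1.0 is published for linux/amd64 only, so its /bin/sh cannot execute on this arm64 host and the first RUN step dies with "exec format error". Two common workarounds, sketched on the assumption that one talks to the cluster's Docker daemon directly (e.g. via minikube docker-env) and that the daemon honors these options; neither is verified by this report:

	# build for the host architecture, if an arm64 variant of the base image exists
	$ docker build --platform linux/arm64 -t aaa:latest ./testdata/image-build/test-arg

	# or register qemu user-mode emulation so amd64 RUN steps can execute on arm64
	$ docker run --privileged --rm tonistiigi/binfmt --install amd64

Either way the Dockerfile itself is not at fault, which is why the same build can pass cleanly on an amd64 agent.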
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-543000 -n image-543000
helpers_test.go:244: <<< TestImageBuild/serial/BuildWithBuildArg FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestImageBuild/serial/BuildWithBuildArg]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p image-543000 logs -n 25
helpers_test.go:252: TestImageBuild/serial/BuildWithBuildArg logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|----------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                           Args                           |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|----------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| image          | functional-742000 image ls                               | functional-742000 | jenkins | v1.31.2 | 25 Sep 23 04:11 PDT | 25 Sep 23 04:11 PDT |
	| image          | functional-742000 image save --daemon                    | functional-742000 | jenkins | v1.31.2 | 25 Sep 23 04:11 PDT | 25 Sep 23 04:11 PDT |
	|                | gcr.io/google-containers/addon-resizer:functional-742000 |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| ssh            | functional-742000 ssh sudo cat                           | functional-742000 | jenkins | v1.31.2 | 25 Sep 23 04:11 PDT | 25 Sep 23 04:11 PDT |
	|                | /etc/ssl/certs/1469.pem                                  |                   |         |         |                     |                     |
	| ssh            | functional-742000 ssh sudo cat                           | functional-742000 | jenkins | v1.31.2 | 25 Sep 23 04:11 PDT | 25 Sep 23 04:11 PDT |
	|                | /usr/share/ca-certificates/1469.pem                      |                   |         |         |                     |                     |
	| ssh            | functional-742000 ssh sudo cat                           | functional-742000 | jenkins | v1.31.2 | 25 Sep 23 04:11 PDT | 25 Sep 23 04:11 PDT |
	|                | /etc/ssl/certs/51391683.0                                |                   |         |         |                     |                     |
	| ssh            | functional-742000 ssh sudo cat                           | functional-742000 | jenkins | v1.31.2 | 25 Sep 23 04:11 PDT | 25 Sep 23 04:11 PDT |
	|                | /etc/ssl/certs/14692.pem                                 |                   |         |         |                     |                     |
	| ssh            | functional-742000 ssh sudo cat                           | functional-742000 | jenkins | v1.31.2 | 25 Sep 23 04:11 PDT | 25 Sep 23 04:11 PDT |
	|                | /usr/share/ca-certificates/14692.pem                     |                   |         |         |                     |                     |
	| ssh            | functional-742000 ssh sudo cat                           | functional-742000 | jenkins | v1.31.2 | 25 Sep 23 04:11 PDT | 25 Sep 23 04:11 PDT |
	|                | /etc/ssl/certs/3ec20f2e.0                                |                   |         |         |                     |                     |
	| docker-env     | functional-742000 docker-env                             | functional-742000 | jenkins | v1.31.2 | 25 Sep 23 04:11 PDT | 25 Sep 23 04:11 PDT |
	| docker-env     | functional-742000 docker-env                             | functional-742000 | jenkins | v1.31.2 | 25 Sep 23 04:11 PDT | 25 Sep 23 04:11 PDT |
	| ssh            | functional-742000 ssh sudo cat                           | functional-742000 | jenkins | v1.31.2 | 25 Sep 23 04:11 PDT | 25 Sep 23 04:11 PDT |
	|                | /etc/test/nested/copy/1469/hosts                         |                   |         |         |                     |                     |
	| image          | functional-742000                                        | functional-742000 | jenkins | v1.31.2 | 25 Sep 23 04:11 PDT | 25 Sep 23 04:11 PDT |
	|                | image ls --format short                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-742000                                        | functional-742000 | jenkins | v1.31.2 | 25 Sep 23 04:11 PDT | 25 Sep 23 04:11 PDT |
	|                | image ls --format yaml                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| ssh            | functional-742000 ssh pgrep                              | functional-742000 | jenkins | v1.31.2 | 25 Sep 23 04:11 PDT |                     |
	|                | buildkitd                                                |                   |         |         |                     |                     |
	| image          | functional-742000 image build -t                         | functional-742000 | jenkins | v1.31.2 | 25 Sep 23 04:11 PDT | 25 Sep 23 04:11 PDT |
	|                | localhost/my-image:functional-742000                     |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                         |                   |         |         |                     |                     |
	| image          | functional-742000 image ls                               | functional-742000 | jenkins | v1.31.2 | 25 Sep 23 04:11 PDT | 25 Sep 23 04:11 PDT |
	| image          | functional-742000                                        | functional-742000 | jenkins | v1.31.2 | 25 Sep 23 04:11 PDT | 25 Sep 23 04:11 PDT |
	|                | image ls --format json                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-742000                                        | functional-742000 | jenkins | v1.31.2 | 25 Sep 23 04:11 PDT | 25 Sep 23 04:11 PDT |
	|                | image ls --format table                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| update-context | functional-742000                                        | functional-742000 | jenkins | v1.31.2 | 25 Sep 23 04:11 PDT | 25 Sep 23 04:11 PDT |
	|                | update-context                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                   |                   |         |         |                     |                     |
	| update-context | functional-742000                                        | functional-742000 | jenkins | v1.31.2 | 25 Sep 23 04:11 PDT | 25 Sep 23 04:11 PDT |
	|                | update-context                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                   |                   |         |         |                     |                     |
	| update-context | functional-742000                                        | functional-742000 | jenkins | v1.31.2 | 25 Sep 23 04:11 PDT | 25 Sep 23 04:11 PDT |
	|                | update-context                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                   |                   |         |         |                     |                     |
	| delete         | -p functional-742000                                     | functional-742000 | jenkins | v1.31.2 | 25 Sep 23 04:13 PDT | 25 Sep 23 04:13 PDT |
	| start          | -p image-543000 --driver=qemu2                           | image-543000      | jenkins | v1.31.2 | 25 Sep 23 04:13 PDT | 25 Sep 23 04:13 PDT |
	|                |                                                          |                   |         |         |                     |                     |
	| image          | build -t aaa:latest                                      | image-543000      | jenkins | v1.31.2 | 25 Sep 23 04:13 PDT | 25 Sep 23 04:13 PDT |
	|                | ./testdata/image-build/test-normal                       |                   |         |         |                     |                     |
	|                | -p image-543000                                          |                   |         |         |                     |                     |
	| image          | build -t aaa:latest                                      | image-543000      | jenkins | v1.31.2 | 25 Sep 23 04:13 PDT | 25 Sep 23 04:13 PDT |
	|                | --build-opt=build-arg=ENV_A=test_env_str                 |                   |         |         |                     |                     |
	|                | --build-opt=no-cache                                     |                   |         |         |                     |                     |
	|                | ./testdata/image-build/test-arg -p                       |                   |         |         |                     |                     |
	|                | image-543000                                             |                   |         |         |                     |                     |
	|----------------|----------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/25 04:13:20
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.21.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0925 04:13:20.019102    3913 out.go:296] Setting OutFile to fd 1 ...
	I0925 04:13:20.019204    3913 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:13:20.019206    3913 out.go:309] Setting ErrFile to fd 2...
	I0925 04:13:20.019208    3913 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:13:20.019330    3913 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17297-1010/.minikube/bin
	I0925 04:13:20.020387    3913 out.go:303] Setting JSON to false
	I0925 04:13:20.036494    3913 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2575,"bootTime":1695637825,"procs":413,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.6","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 04:13:20.036563    3913 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0925 04:13:20.039106    3913 out.go:177] * [image-543000] minikube v1.31.2 on Darwin 13.6 (arm64)
	I0925 04:13:20.046068    3913 out.go:177]   - MINIKUBE_LOCATION=17297
	I0925 04:13:20.046115    3913 notify.go:220] Checking for updates...
	I0925 04:13:20.053950    3913 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	I0925 04:13:20.057126    3913 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 04:13:20.060186    3913 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 04:13:20.063899    3913 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	I0925 04:13:20.066180    3913 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 04:13:20.069322    3913 driver.go:373] Setting default libvirt URI to qemu:///system
	I0925 04:13:20.072923    3913 out.go:177] * Using the qemu2 driver based on user configuration
	I0925 04:13:20.080070    3913 start.go:298] selected driver: qemu2
	I0925 04:13:20.080074    3913 start.go:902] validating driver "qemu2" against <nil>
	I0925 04:13:20.080080    3913 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 04:13:20.080157    3913 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0925 04:13:20.083087    3913 out.go:177] * Automatically selected the socket_vmnet network
	I0925 04:13:20.088485    3913 start_flags.go:384] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0925 04:13:20.088575    3913 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0925 04:13:20.088587    3913 cni.go:84] Creating CNI manager for ""
	I0925 04:13:20.088597    3913 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 04:13:20.088601    3913 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0925 04:13:20.088608    3913 start_flags.go:321] config:
	{Name:image-543000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:image-543000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 04:13:20.093099    3913 iso.go:125] acquiring lock: {Name:mkf881a60cf9fd1672567914305ff6f7a4f13809 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 04:13:20.099060    3913 out.go:177] * Starting control plane node image-543000 in cluster image-543000
	I0925 04:13:20.103101    3913 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0925 04:13:20.103118    3913 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0925 04:13:20.103132    3913 cache.go:57] Caching tarball of preloaded images
	I0925 04:13:20.103195    3913 preload.go:174] Found /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0925 04:13:20.103198    3913 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0925 04:13:20.103429    3913 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/image-543000/config.json ...
	I0925 04:13:20.103441    3913 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/image-543000/config.json: {Name:mk931c9fec03130a79e4fdc881feb2ca35847d07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 04:13:20.103621    3913 start.go:365] acquiring machines lock for image-543000: {Name:mk918d99818e2cf21e5912bc291ed18d4b442ba6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 04:13:20.103649    3913 start.go:369] acquired machines lock for "image-543000" in 23.875µs
	I0925 04:13:20.103656    3913 start.go:93] Provisioning new machine with config: &{Name:image-543000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:image-543000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 04:13:20.103693    3913 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 04:13:20.112058    3913 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0925 04:13:20.133470    3913 start.go:159] libmachine.API.Create for "image-543000" (driver="qemu2")
	I0925 04:13:20.133497    3913 client.go:168] LocalClient.Create starting
	I0925 04:13:20.133553    3913 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem
	I0925 04:13:20.133580    3913 main.go:141] libmachine: Decoding PEM data...
	I0925 04:13:20.133587    3913 main.go:141] libmachine: Parsing certificate...
	I0925 04:13:20.133631    3913 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/cert.pem
	I0925 04:13:20.133648    3913 main.go:141] libmachine: Decoding PEM data...
	I0925 04:13:20.133653    3913 main.go:141] libmachine: Parsing certificate...
	I0925 04:13:20.133929    3913 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17297-1010/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0925 04:13:20.438335    3913 main.go:141] libmachine: Creating SSH key...
	I0925 04:13:20.490912    3913 main.go:141] libmachine: Creating Disk image...
	I0925 04:13:20.490915    3913 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 04:13:20.491051    3913 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/image-543000/disk.qcow2.raw /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/image-543000/disk.qcow2
	I0925 04:13:20.508799    3913 main.go:141] libmachine: STDOUT: 
	I0925 04:13:20.508812    3913 main.go:141] libmachine: STDERR: 
	I0925 04:13:20.508865    3913 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/image-543000/disk.qcow2 +20000M
	I0925 04:13:20.516526    3913 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 04:13:20.516542    3913 main.go:141] libmachine: STDERR: 
	I0925 04:13:20.516553    3913 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/image-543000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/image-543000/disk.qcow2
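
The disk image above is produced by two plain qemu-img subprocess calls: convert the raw seed to qcow2, then grow it by the requested size. A minimal Go sketch of the same sequence (paths and size are illustrative stand-ins, not minikube's API):

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// createDisk mirrors the two qemu-img calls logged above: convert the
// raw seed file to qcow2, then grow the image by extraMB megabytes.
func createDisk(raw, qcow2 string, extraMB int) error {
	if out, err := exec.Command("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow2).CombinedOutput(); err != nil {
		return fmt.Errorf("convert: %v: %s", err, out)
	}
	if out, err := exec.Command("qemu-img", "resize", qcow2, fmt.Sprintf("+%dM", extraMB)).CombinedOutput(); err != nil {
		return fmt.Errorf("resize: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := createDisk("disk.qcow2.raw", "disk.qcow2", 20000); err != nil {
		log.Fatal(err)
	}
}
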
	I0925 04:13:20.516560    3913 main.go:141] libmachine: Starting QEMU VM...
	I0925 04:13:20.516590    3913 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/image-543000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/image-543000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/image-543000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:cd:eb:d5:3a:6a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/image-543000/disk.qcow2
	I0925 04:13:20.559164    3913 main.go:141] libmachine: STDOUT: 
	I0925 04:13:20.559181    3913 main.go:141] libmachine: STDERR: 
	I0925 04:13:20.559184    3913 main.go:141] libmachine: Attempt 0
	I0925 04:13:20.559201    3913 main.go:141] libmachine: Searching for c2:cd:eb:d5:3a:6a in /var/db/dhcpd_leases ...
	I0925 04:13:20.559268    3913 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0925 04:13:20.559287    3913 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:7a:1a:96:ca:e:57 ID:1,7a:1a:96:ca:e:57 Lease:0x6512bb69}
	I0925 04:13:20.559291    3913 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:46:8b:64:9b:3d:9c ID:1,46:8b:64:9b:3d:9c Lease:0x651169dd}
	I0925 04:13:20.559295    3913 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:4e:70:b3:50:3d:bc ID:1,4e:70:b3:50:3d:bc Lease:0x651169b8}
	I0925 04:13:22.561464    3913 main.go:141] libmachine: Attempt 1
	I0925 04:13:22.561510    3913 main.go:141] libmachine: Searching for c2:cd:eb:d5:3a:6a in /var/db/dhcpd_leases ...
	I0925 04:13:22.561923    3913 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0925 04:13:22.561965    3913 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:7a:1a:96:ca:e:57 ID:1,7a:1a:96:ca:e:57 Lease:0x6512bb69}
	I0925 04:13:22.561990    3913 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:46:8b:64:9b:3d:9c ID:1,46:8b:64:9b:3d:9c Lease:0x651169dd}
	I0925 04:13:22.562043    3913 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:4e:70:b3:50:3d:bc ID:1,4e:70:b3:50:3d:bc Lease:0x651169b8}
	I0925 04:13:24.564265    3913 main.go:141] libmachine: Attempt 2
	I0925 04:13:24.564279    3913 main.go:141] libmachine: Searching for c2:cd:eb:d5:3a:6a in /var/db/dhcpd_leases ...
	I0925 04:13:24.564398    3913 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0925 04:13:24.564409    3913 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:7a:1a:96:ca:e:57 ID:1,7a:1a:96:ca:e:57 Lease:0x6512bb69}
	I0925 04:13:24.564413    3913 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:46:8b:64:9b:3d:9c ID:1,46:8b:64:9b:3d:9c Lease:0x651169dd}
	I0925 04:13:24.564418    3913 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:4e:70:b3:50:3d:bc ID:1,4e:70:b3:50:3d:bc Lease:0x651169b8}
	I0925 04:13:26.566492    3913 main.go:141] libmachine: Attempt 3
	I0925 04:13:26.566504    3913 main.go:141] libmachine: Searching for c2:cd:eb:d5:3a:6a in /var/db/dhcpd_leases ...
	I0925 04:13:26.566555    3913 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0925 04:13:26.566560    3913 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:7a:1a:96:ca:e:57 ID:1,7a:1a:96:ca:e:57 Lease:0x6512bb69}
	I0925 04:13:26.566619    3913 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:46:8b:64:9b:3d:9c ID:1,46:8b:64:9b:3d:9c Lease:0x651169dd}
	I0925 04:13:26.566623    3913 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:4e:70:b3:50:3d:bc ID:1,4e:70:b3:50:3d:bc Lease:0x651169b8}
	I0925 04:13:28.568654    3913 main.go:141] libmachine: Attempt 4
	I0925 04:13:28.568657    3913 main.go:141] libmachine: Searching for c2:cd:eb:d5:3a:6a in /var/db/dhcpd_leases ...
	I0925 04:13:28.568688    3913 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0925 04:13:28.568693    3913 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:7a:1a:96:ca:e:57 ID:1,7a:1a:96:ca:e:57 Lease:0x6512bb69}
	I0925 04:13:28.568697    3913 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:46:8b:64:9b:3d:9c ID:1,46:8b:64:9b:3d:9c Lease:0x651169dd}
	I0925 04:13:28.568701    3913 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:4e:70:b3:50:3d:bc ID:1,4e:70:b3:50:3d:bc Lease:0x651169b8}
	I0925 04:13:30.570424    3913 main.go:141] libmachine: Attempt 5
	I0925 04:13:30.570433    3913 main.go:141] libmachine: Searching for c2:cd:eb:d5:3a:6a in /var/db/dhcpd_leases ...
	I0925 04:13:30.570514    3913 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0925 04:13:30.570522    3913 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:7a:1a:96:ca:e:57 ID:1,7a:1a:96:ca:e:57 Lease:0x6512bb69}
	I0925 04:13:30.570526    3913 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:46:8b:64:9b:3d:9c ID:1,46:8b:64:9b:3d:9c Lease:0x651169dd}
	I0925 04:13:30.570530    3913 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:4e:70:b3:50:3d:bc ID:1,4e:70:b3:50:3d:bc Lease:0x651169b8}
	I0925 04:13:32.572636    3913 main.go:141] libmachine: Attempt 6
	I0925 04:13:32.572654    3913 main.go:141] libmachine: Searching for c2:cd:eb:d5:3a:6a in /var/db/dhcpd_leases ...
	I0925 04:13:32.572833    3913 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0925 04:13:32.572848    3913 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:c2:cd:eb:d5:3a:6a ID:1,c2:cd:eb:d5:3a:6a Lease:0x6512bcdb}
	I0925 04:13:32.572852    3913 main.go:141] libmachine: Found match: c2:cd:eb:d5:3a:6a
	I0925 04:13:32.572868    3913 main.go:141] libmachine: IP: 192.168.105.5
	I0925 04:13:32.572875    3913 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.5)...
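
The "Attempt N" lines above are a roughly 2-second poll of macOS's /var/db/dhcpd_leases, looking for the VM's freshly assigned MAC. A crude sketch of that loop (lease-file parsing deliberately simplified; real entries also carry name, ID, and lease fields):

package main

import (
	"fmt"
	"os"
	"strings"
	"time"
)

// waitForIP polls the DHCP lease file for the VM's MAC and returns the
// ip_address of the matching entry. Each lease block lists ip_address
// before hw_address, which this line-oriented parse relies on. Note:
// macOS strips leading zeros in MACs (e.g. 7a:1a:96:ca:e:57), so a
// production version would normalize both sides first.
func waitForIP(mac string, attempts int) (string, error) {
	for i := 0; i < attempts; i++ {
		if data, err := os.ReadFile("/var/db/dhcpd_leases"); err == nil {
			var ip string
			for _, line := range strings.Split(string(data), "\n") {
				line = strings.TrimSpace(line)
				if strings.HasPrefix(line, "ip_address=") {
					ip = strings.TrimPrefix(line, "ip_address=")
				}
				if strings.HasPrefix(line, "hw_address=") && strings.Contains(line, mac) {
					return ip, nil
				}
			}
		}
		time.Sleep(2 * time.Second) // matches the ~2s spacing of the attempts above
	}
	return "", fmt.Errorf("no lease for %s after %d attempts", mac, attempts)
}

func main() {
	ip, err := waitForIP("c2:cd:eb:d5:3a:6a", 30)
	fmt.Println(ip, err)
}
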
	I0925 04:13:33.577436    3913 machine.go:88] provisioning docker machine ...
	I0925 04:13:33.577452    3913 buildroot.go:166] provisioning hostname "image-543000"
	I0925 04:13:33.577499    3913 main.go:141] libmachine: Using SSH client type: native
	I0925 04:13:33.577783    3913 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c80760] 0x100c82ed0 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0925 04:13:33.577787    3913 main.go:141] libmachine: About to run SSH command:
	sudo hostname image-543000 && echo "image-543000" | sudo tee /etc/hostname
	I0925 04:13:33.604430    3913 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0925 04:13:36.725371    3913 main.go:141] libmachine: SSH cmd err, output: <nil>: image-543000
	
	I0925 04:13:36.725528    3913 main.go:141] libmachine: Using SSH client type: native
	I0925 04:13:36.726134    3913 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c80760] 0x100c82ed0 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0925 04:13:36.726148    3913 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\simage-543000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 image-543000/g' /etc/hosts;
				else 
					echo '127.0.1.1 image-543000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0925 04:13:36.817114    3913 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0925 04:13:36.817127    3913 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17297-1010/.minikube CaCertPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17297-1010/.minikube}
	I0925 04:13:36.817137    3913 buildroot.go:174] setting up certificates
	I0925 04:13:36.817147    3913 provision.go:83] configureAuth start
	I0925 04:13:36.817152    3913 provision.go:138] copyHostCerts
	I0925 04:13:36.817271    3913 exec_runner.go:144] found /Users/jenkins/minikube-integration/17297-1010/.minikube/cert.pem, removing ...
	I0925 04:13:36.817278    3913 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17297-1010/.minikube/cert.pem
	I0925 04:13:36.817443    3913 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17297-1010/.minikube/cert.pem (1123 bytes)
	I0925 04:13:36.817721    3913 exec_runner.go:144] found /Users/jenkins/minikube-integration/17297-1010/.minikube/key.pem, removing ...
	I0925 04:13:36.817724    3913 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17297-1010/.minikube/key.pem
	I0925 04:13:36.817794    3913 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17297-1010/.minikube/key.pem (1679 bytes)
	I0925 04:13:36.817946    3913 exec_runner.go:144] found /Users/jenkins/minikube-integration/17297-1010/.minikube/ca.pem, removing ...
	I0925 04:13:36.817948    3913 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17297-1010/.minikube/ca.pem
	I0925 04:13:36.818015    3913 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17297-1010/.minikube/ca.pem (1082 bytes)
	I0925 04:13:36.818128    3913 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca-key.pem org=jenkins.image-543000 san=[192.168.105.5 192.168.105.5 localhost 127.0.0.1 minikube image-543000]
	I0925 04:13:36.863367    3913 provision.go:172] copyRemoteCerts
	I0925 04:13:36.863396    3913 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0925 04:13:36.863401    3913 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/image-543000/id_rsa Username:docker}
	I0925 04:13:36.901928    3913 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0925 04:13:36.908546    3913 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0925 04:13:36.915931    3913 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0925 04:13:36.923417    3913 provision.go:86] duration metric: configureAuth took 106.265458ms
	I0925 04:13:36.923422    3913 buildroot.go:189] setting minikube options for container-runtime
	I0925 04:13:36.923516    3913 config.go:182] Loaded profile config "image-543000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 04:13:36.923552    3913 main.go:141] libmachine: Using SSH client type: native
	I0925 04:13:36.923764    3913 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c80760] 0x100c82ed0 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0925 04:13:36.923768    3913 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0925 04:13:36.995979    3913 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0925 04:13:36.995983    3913 buildroot.go:70] root file system type: tmpfs
	I0925 04:13:36.996045    3913 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0925 04:13:36.996092    3913 main.go:141] libmachine: Using SSH client type: native
	I0925 04:13:36.996345    3913 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c80760] 0x100c82ed0 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0925 04:13:36.996380    3913 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0925 04:13:37.074221    3913 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0925 04:13:37.074267    3913 main.go:141] libmachine: Using SSH client type: native
	I0925 04:13:37.074506    3913 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c80760] 0x100c82ed0 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0925 04:13:37.074513    3913 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0925 04:13:37.459646    3913 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
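
The diff || { mv ...; systemctl ...; } one-liner above is an update-if-changed guard: the unit is staged as docker.service.new and only swapped in, with a daemon-reload and restart, when diff exits non-zero (the file differs or, as here, does not exist yet). A sketch of the guard, with run a hypothetical stand-in for the SSH command runner:

package main

import "fmt"

// installIfChanged mirrors the guard above: only when diff fails (unit
// missing or different) is the staged file swapped in and docker
// reloaded, enabled, and restarted. run returns an error on non-zero exit.
func installIfChanged(run func(cmd string) error) error {
	if run("sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new") == nil {
		return nil // unit unchanged: leave the running daemon alone
	}
	return run("sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service && " +
		"sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker")
}

func main() {
	// Dry run: print the commands instead of executing them over SSH.
	_ = installIfChanged(func(cmd string) error { fmt.Println(cmd); return fmt.Errorf("differs") })
}
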
	
	I0925 04:13:37.459655    3913 machine.go:91] provisioned docker machine in 3.882208042s
	I0925 04:13:37.459659    3913 client.go:171] LocalClient.Create took 17.326146s
	I0925 04:13:37.459666    3913 start.go:167] duration metric: libmachine.API.Create for "image-543000" took 17.326187291s
	I0925 04:13:37.459669    3913 start.go:300] post-start starting for "image-543000" (driver="qemu2")
	I0925 04:13:37.459673    3913 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0925 04:13:37.459748    3913 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0925 04:13:37.459755    3913 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/image-543000/id_rsa Username:docker}
	I0925 04:13:37.499528    3913 ssh_runner.go:195] Run: cat /etc/os-release
	I0925 04:13:37.501048    3913 info.go:137] Remote host: Buildroot 2021.02.12
	I0925 04:13:37.501053    3913 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17297-1010/.minikube/addons for local assets ...
	I0925 04:13:37.501136    3913 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17297-1010/.minikube/files for local assets ...
	I0925 04:13:37.501238    3913 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17297-1010/.minikube/files/etc/ssl/certs/14692.pem -> 14692.pem in /etc/ssl/certs
	I0925 04:13:37.501351    3913 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0925 04:13:37.504069    3913 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/files/etc/ssl/certs/14692.pem --> /etc/ssl/certs/14692.pem (1708 bytes)
	I0925 04:13:37.510703    3913 start.go:303] post-start completed in 51.028416ms
	I0925 04:13:37.511033    3913 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/image-543000/config.json ...
	I0925 04:13:37.511186    3913 start.go:128] duration metric: createHost completed in 17.407477167s
	I0925 04:13:37.511216    3913 main.go:141] libmachine: Using SSH client type: native
	I0925 04:13:37.511431    3913 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c80760] 0x100c82ed0 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0925 04:13:37.511434    3913 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0925 04:13:37.582459    3913 main.go:141] libmachine: SSH cmd err, output: <nil>: 1695640417.624182128
	
	I0925 04:13:37.582466    3913 fix.go:206] guest clock: 1695640417.624182128
	I0925 04:13:37.582471    3913 fix.go:219] Guest: 2023-09-25 04:13:37.624182128 -0700 PDT Remote: 2023-09-25 04:13:37.511188 -0700 PDT m=+17.511203543 (delta=112.994128ms)
	I0925 04:13:37.582480    3913 fix.go:190] guest clock delta is within tolerance: 112.994128ms
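
The guest-clock check above runs date +%s.%N in the VM and compares it against the host wall clock, accepting the machine when the skew is within tolerance. A sketch of the delta computation (the tolerance value is assumed; the constants below reproduce the ~113ms delta from the log):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and returns
// guest-minus-host skew, mirroring the fix.go lines above.
func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
	sec, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(sec*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	// Host timestamp taken from the "Remote:" value in the log.
	host := time.Date(2023, 9, 25, 4, 13, 37, 511188000, time.FixedZone("PDT", -7*3600))
	d, _ := clockDelta("1695640417.624182128", host)
	fmt.Printf("delta=%v, within assumed 2s tolerance: %v\n", d, d.Abs() < 2*time.Second)
}
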
	I0925 04:13:37.582482    3913 start.go:83] releasing machines lock for "image-543000", held for 17.478817s
	I0925 04:13:37.582735    3913 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0925 04:13:37.582735    3913 ssh_runner.go:195] Run: cat /version.json
	I0925 04:13:37.582741    3913 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/image-543000/id_rsa Username:docker}
	I0925 04:13:37.582752    3913 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/image-543000/id_rsa Username:docker}
	I0925 04:13:37.661428    3913 ssh_runner.go:195] Run: systemctl --version
	I0925 04:13:37.663610    3913 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0925 04:13:37.665696    3913 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0925 04:13:37.665728    3913 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0925 04:13:37.671254    3913 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0925 04:13:37.671268    3913 start.go:469] detecting cgroup driver to use...
	I0925 04:13:37.671354    3913 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0925 04:13:37.677230    3913 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0925 04:13:37.680625    3913 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0925 04:13:37.683957    3913 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0925 04:13:37.683978    3913 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0925 04:13:37.687232    3913 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0925 04:13:37.690160    3913 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0925 04:13:37.692973    3913 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0925 04:13:37.696289    3913 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0925 04:13:37.699635    3913 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0925 04:13:37.703256    3913 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0925 04:13:37.705978    3913 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0925 04:13:37.708647    3913 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 04:13:37.787335    3913 ssh_runner.go:195] Run: sudo systemctl restart containerd
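
The containerd edits above are line-oriented substitutions; for example, the SystemdCgroup toggle is equivalent to this Go regexp rewrite of the logged sed expression (the TOML section name is taken from containerd's stock config layout):

package main

import (
	"fmt"
	"regexp"
)

// Re-express `sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'`
// with Go's regexp package; (?m) makes ^/$ match per line.
func main() {
	conf := "  [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n    SystemdCgroup = true\n"
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	fmt.Print(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
}
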
	I0925 04:13:37.795244    3913 start.go:469] detecting cgroup driver to use...
	I0925 04:13:37.795320    3913 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0925 04:13:37.805928    3913 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0925 04:13:37.810620    3913 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0925 04:13:37.816500    3913 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0925 04:13:37.822186    3913 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0925 04:13:37.828518    3913 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0925 04:13:37.872094    3913 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0925 04:13:37.877604    3913 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0925 04:13:37.883154    3913 ssh_runner.go:195] Run: which cri-dockerd
	I0925 04:13:37.884554    3913 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0925 04:13:37.887609    3913 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0925 04:13:37.892671    3913 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0925 04:13:37.969740    3913 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0925 04:13:38.049619    3913 docker.go:554] configuring docker to use "cgroupfs" as cgroup driver...
	I0925 04:13:38.049667    3913 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0925 04:13:38.054896    3913 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 04:13:38.127289    3913 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0925 04:13:39.291013    3913 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.163710792s)
	I0925 04:13:39.291074    3913 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0925 04:13:39.365688    3913 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0925 04:13:39.448842    3913 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0925 04:13:39.528611    3913 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 04:13:39.601400    3913 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0925 04:13:39.612675    3913 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 04:13:39.689083    3913 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0925 04:13:39.717567    3913 start.go:516] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0925 04:13:39.717645    3913 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0925 04:13:39.719887    3913 start.go:537] Will wait 60s for crictl version
	I0925 04:13:39.719927    3913 ssh_runner.go:195] Run: which crictl
	I0925 04:13:39.721318    3913 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0925 04:13:39.741028    3913 start.go:553] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
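
Both "Will wait 60s" steps above share one shape: retry a cheap probe (a stat of the CRI socket, then crictl version) until it succeeds or the budget lapses. A generic sketch of that retry loop (the retry interval is an assumption):

package main

import (
	"fmt"
	"time"
)

// waitFor retries a cheap probe until it succeeds or the budget lapses,
// the shape behind the "Will wait 60s for ..." lines above.
func waitFor(probe func() error, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	var err error
	for time.Now().Before(deadline) {
		if err = probe(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond) // retry interval is an assumption
	}
	return fmt.Errorf("gave up after %v: %v", timeout, err)
}

func main() {
	n := 0
	err := waitFor(func() error {
		if n++; n < 3 {
			return fmt.Errorf("not ready")
		}
		return nil
	}, 60*time.Second)
	fmt.Println(err) // <nil>: probe succeeded on the third try
}
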
	I0925 04:13:39.741087    3913 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0925 04:13:39.750677    3913 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0925 04:13:39.766189    3913 out.go:204] * Preparing Kubernetes v1.28.2 on Docker 24.0.6 ...
	I0925 04:13:39.766320    3913 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0925 04:13:39.767711    3913 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0925 04:13:39.771808    3913 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0925 04:13:39.771850    3913 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0925 04:13:39.777061    3913 docker.go:664] Got preloaded images: 
	I0925 04:13:39.777065    3913 docker.go:670] registry.k8s.io/kube-apiserver:v1.28.2 wasn't preloaded
	I0925 04:13:39.777106    3913 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0925 04:13:39.780387    3913 ssh_runner.go:195] Run: which lz4
	I0925 04:13:39.781728    3913 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0925 04:13:39.782915    3913 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0925 04:13:39.782926    3913 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (356993689 bytes)
	I0925 04:13:41.098456    3913 docker.go:628] Took 1.316766 seconds to copy over tarball
	I0925 04:13:41.098513    3913 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0925 04:13:42.129587    3913 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.03106025s)
	I0925 04:13:42.129596    3913 ssh_runner.go:146] rm: /preloaded.tar.lz4
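
The sequence above is the preload fast path: because kube-apiserver:v1.28.2 was absent from docker images, the ~357MB lz4 tarball is copied into the VM, unpacked under /var, and deleted. A sketch of the decision that gates the copy:

package main

import "fmt"

// needsPreload captures the check logged at docker.go:670: if a required
// image for the target Kubernetes version is missing from `docker
// images`, ship and unpack the preload tarball; otherwise skip the copy.
func needsPreload(images []string, k8sVersion string) bool {
	want := "registry.k8s.io/kube-apiserver:" + k8sVersion
	for _, img := range images {
		if img == want {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(needsPreload(nil, "v1.28.2")) // true: fresh VM, copy the tarball
	fmt.Println(needsPreload([]string{"registry.k8s.io/kube-apiserver:v1.28.2"}, "v1.28.2")) // false
}
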
	I0925 04:13:42.145046    3913 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0925 04:13:42.148041    3913 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0925 04:13:42.153556    3913 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 04:13:42.239197    3913 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0925 04:13:43.722924    3913 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.483711666s)
	I0925 04:13:43.723009    3913 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0925 04:13:43.728955    3913 docker.go:664] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.2
	registry.k8s.io/kube-controller-manager:v1.28.2
	registry.k8s.io/kube-scheduler:v1.28.2
	registry.k8s.io/kube-proxy:v1.28.2
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0925 04:13:43.728962    3913 cache_images.go:84] Images are preloaded, skipping loading
	I0925 04:13:43.729022    3913 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0925 04:13:43.736748    3913 cni.go:84] Creating CNI manager for ""
	I0925 04:13:43.736754    3913 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 04:13:43.736763    3913 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0925 04:13:43.736771    3913 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.5 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:image-543000 NodeName:image-543000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0925 04:13:43.736838    3913 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "image-543000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
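
The kubeadm.yaml above is generated from the option values printed at kubeadm.go:176 and staged as kubeadm.yaml.new before being copied into place later in the log. A sketch, not minikube's actual template, of rendering such a fragment with text/template:

package main

import (
	"os"
	"text/template"
)

// Cut-down rendering of a few ClusterConfiguration fields seen in the
// generated YAML above; field names here are illustrative.
const frag = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
controlPlaneEndpoint: {{.Endpoint}}:{{.Port}}
kubernetesVersion: {{.Version}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(frag))
	_ = t.Execute(os.Stdout, map[string]string{
		"Endpoint": "control-plane.minikube.internal", "Port": "8443",
		"Version": "v1.28.2", "PodSubnet": "10.244.0.0/16", "ServiceSubnet": "10.96.0.0/12",
	})
}
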
	
	I0925 04:13:43.736885    3913 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=image-543000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:image-543000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0925 04:13:43.736939    3913 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I0925 04:13:43.740270    3913 binaries.go:44] Found k8s binaries, skipping transfer
	I0925 04:13:43.740300    3913 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0925 04:13:43.743272    3913 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I0925 04:13:43.748536    3913 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0925 04:13:43.753388    3913 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2098 bytes)
	I0925 04:13:43.758117    3913 ssh_runner.go:195] Run: grep 192.168.105.5	control-plane.minikube.internal$ /etc/hosts
	I0925 04:13:43.759339    3913 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.5	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0925 04:13:43.763024    3913 certs.go:56] Setting up /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/image-543000 for IP: 192.168.105.5
	I0925 04:13:43.763032    3913 certs.go:190] acquiring lock for shared ca certs: {Name:mk095b03680bcdeba6c321a9f458c9fbafa67639 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 04:13:43.763177    3913 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17297-1010/.minikube/ca.key
	I0925 04:13:43.763214    3913 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17297-1010/.minikube/proxy-client-ca.key
	I0925 04:13:43.763237    3913 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/image-543000/client.key
	I0925 04:13:43.763244    3913 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/image-543000/client.crt with IP's: []
	I0925 04:13:43.848366    3913 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/image-543000/client.crt ...
	I0925 04:13:43.848371    3913 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/image-543000/client.crt: {Name:mkc189128884acaf11534917dd36cc235b63d70b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 04:13:43.848593    3913 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/image-543000/client.key ...
	I0925 04:13:43.848596    3913 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/image-543000/client.key: {Name:mk45141ce39982665644c8a7b348533870851d49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 04:13:43.848702    3913 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/image-543000/apiserver.key.e69b33ca
	I0925 04:13:43.848707    3913 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/image-543000/apiserver.crt.e69b33ca with IP's: [192.168.105.5 10.96.0.1 127.0.0.1 10.0.0.1]
	I0925 04:13:44.013974    3913 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/image-543000/apiserver.crt.e69b33ca ...
	I0925 04:13:44.013976    3913 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/image-543000/apiserver.crt.e69b33ca: {Name:mk9b6c0c071e3021095c3d7a8e40c6828f2443e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 04:13:44.014120    3913 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/image-543000/apiserver.key.e69b33ca ...
	I0925 04:13:44.014122    3913 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/image-543000/apiserver.key.e69b33ca: {Name:mkc2c6cb971babcebdf47f1ed789cf9fe9b4a5d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 04:13:44.014213    3913 certs.go:337] copying /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/image-543000/apiserver.crt.e69b33ca -> /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/image-543000/apiserver.crt
	I0925 04:13:44.014443    3913 certs.go:341] copying /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/image-543000/apiserver.key.e69b33ca -> /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/image-543000/apiserver.key
	I0925 04:13:44.014546    3913 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/image-543000/proxy-client.key
	I0925 04:13:44.014552    3913 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/image-543000/proxy-client.crt with IP's: []
	I0925 04:13:44.054922    3913 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/image-543000/proxy-client.crt ...
	I0925 04:13:44.054924    3913 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/image-543000/proxy-client.crt: {Name:mk188f515016af4c843cdd51df3355b9aa7bce5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 04:13:44.055067    3913 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/image-543000/proxy-client.key ...
	I0925 04:13:44.055070    3913 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/image-543000/proxy-client.key: {Name:mk77f65b7e888a0c1ddf0bf656f6f8de3469bdcc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 04:13:44.055336    3913 certs.go:437] found cert: /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/1469.pem (1338 bytes)
	W0925 04:13:44.055363    3913 certs.go:433] ignoring /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/1469_empty.pem, impossibly tiny 0 bytes
	I0925 04:13:44.055370    3913 certs.go:437] found cert: /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca-key.pem (1675 bytes)
	I0925 04:13:44.055393    3913 certs.go:437] found cert: /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem (1082 bytes)
	I0925 04:13:44.055413    3913 certs.go:437] found cert: /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/cert.pem (1123 bytes)
	I0925 04:13:44.055438    3913 certs.go:437] found cert: /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/key.pem (1679 bytes)
	I0925 04:13:44.055484    3913 certs.go:437] found cert: /Users/jenkins/minikube-integration/17297-1010/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17297-1010/.minikube/files/etc/ssl/certs/14692.pem (1708 bytes)
	I0925 04:13:44.055848    3913 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/image-543000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0925 04:13:44.063558    3913 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/image-543000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0925 04:13:44.070653    3913 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/image-543000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0925 04:13:44.077242    3913 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/image-543000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0925 04:13:44.084120    3913 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0925 04:13:44.091313    3913 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0925 04:13:44.098153    3913 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0925 04:13:44.104664    3913 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0925 04:13:44.111630    3913 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0925 04:13:44.118403    3913 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/1469.pem --> /usr/share/ca-certificates/1469.pem (1338 bytes)
	I0925 04:13:44.124977    3913 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/files/etc/ssl/certs/14692.pem --> /usr/share/ca-certificates/14692.pem (1708 bytes)
	I0925 04:13:44.131713    3913 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0925 04:13:44.136627    3913 ssh_runner.go:195] Run: openssl version
	I0925 04:13:44.138648    3913 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1469.pem && ln -fs /usr/share/ca-certificates/1469.pem /etc/ssl/certs/1469.pem"
	I0925 04:13:44.141538    3913 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1469.pem
	I0925 04:13:44.142886    3913 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 25 11:07 /usr/share/ca-certificates/1469.pem
	I0925 04:13:44.142907    3913 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1469.pem
	I0925 04:13:44.144602    3913 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1469.pem /etc/ssl/certs/51391683.0"
	I0925 04:13:44.147768    3913 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14692.pem && ln -fs /usr/share/ca-certificates/14692.pem /etc/ssl/certs/14692.pem"
	I0925 04:13:44.150883    3913 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14692.pem
	I0925 04:13:44.152379    3913 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 25 11:07 /usr/share/ca-certificates/14692.pem
	I0925 04:13:44.152398    3913 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14692.pem
	I0925 04:13:44.154180    3913 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14692.pem /etc/ssl/certs/3ec20f2e.0"
	I0925 04:13:44.156906    3913 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0925 04:13:44.160256    3913 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0925 04:13:44.161755    3913 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 25 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I0925 04:13:44.161775    3913 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0925 04:13:44.163500    3913 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
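
	Note: the three openssl/ln sequences above follow the standard OpenSSL CA-directory convention: each certificate under /usr/share/ca-certificates is linked into /etc/ssl/certs under its subject-hash name (e.g. b5213941.0 for minikubeCA.pem). A minimal sketch of the same convention, assuming an arbitrary PEM file ./ca.pem rather than any file from this run:

	    HASH=$(openssl x509 -hash -noout -in ./ca.pem)            # prints the subject hash, e.g. b5213941
	    sudo ln -fs "$(pwd)/ca.pem" "/etc/ssl/certs/${HASH}.0"    # ".0" = first certificate with this hash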
	I0925 04:13:44.166655    3913 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0925 04:13:44.167899    3913 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0925 04:13:44.167928    3913 kubeadm.go:404] StartCluster: {Name:image-543000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:image-543000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
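
	Note: the StartCluster blob above is minikube's per-profile cluster config, which is also persisted on the host as JSON under the profile directory visible in the scp paths earlier in this log. A sketch for inspecting it, assuming jq is installed on the host:

	    jq '.KubernetesConfig' /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/image-543000/config.json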
	I0925 04:13:44.167991    3913 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0925 04:13:44.173453    3913 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0925 04:13:44.176360    3913 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0925 04:13:44.179389    3913 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0925 04:13:44.182314    3913 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0925 04:13:44.182327    3913 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0925 04:13:44.204586    3913 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I0925 04:13:44.204609    3913 kubeadm.go:322] [preflight] Running pre-flight checks
	I0925 04:13:44.261707    3913 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0925 04:13:44.261754    3913 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0925 04:13:44.261805    3913 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0925 04:13:44.360310    3913 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0925 04:13:44.369685    3913 out.go:204]   - Generating certificates and keys ...
	I0925 04:13:44.369724    3913 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0925 04:13:44.369772    3913 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0925 04:13:44.464884    3913 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0925 04:13:44.569536    3913 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0925 04:13:44.731209    3913 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0925 04:13:44.841987    3913 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0925 04:13:44.941285    3913 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0925 04:13:44.941338    3913 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [image-543000 localhost] and IPs [192.168.105.5 127.0.0.1 ::1]
	I0925 04:13:45.054094    3913 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0925 04:13:45.054161    3913 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [image-543000 localhost] and IPs [192.168.105.5 127.0.0.1 ::1]
	I0925 04:13:45.166130    3913 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0925 04:13:45.261316    3913 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0925 04:13:45.413143    3913 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0925 04:13:45.413167    3913 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0925 04:13:45.556683    3913 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0925 04:13:45.666281    3913 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0925 04:13:45.858753    3913 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0925 04:13:45.990333    3913 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0925 04:13:45.990605    3913 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0925 04:13:45.992204    3913 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0925 04:13:45.996404    3913 out.go:204]   - Booting up control plane ...
	I0925 04:13:45.996465    3913 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0925 04:13:45.996515    3913 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0925 04:13:45.996552    3913 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0925 04:13:46.000084    3913 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0925 04:13:46.000140    3913 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0925 04:13:46.000165    3913 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0925 04:13:46.085105    3913 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0925 04:13:50.085959    3913 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.001093 seconds
	I0925 04:13:50.086027    3913 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0925 04:13:50.091042    3913 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0925 04:13:50.600452    3913 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0925 04:13:50.600541    3913 kubeadm.go:322] [mark-control-plane] Marking the node image-543000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0925 04:13:51.105138    3913 kubeadm.go:322] [bootstrap-token] Using token: 7pgopr.krvc6fzqy6kd4g5k
	I0925 04:13:51.111297    3913 out.go:204]   - Configuring RBAC rules ...
	I0925 04:13:51.111363    3913 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0925 04:13:51.112150    3913 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0925 04:13:51.119036    3913 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0925 04:13:51.119978    3913 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0925 04:13:51.121209    3913 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0925 04:13:51.122341    3913 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0925 04:13:51.126291    3913 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0925 04:13:51.293252    3913 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0925 04:13:51.515129    3913 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0925 04:13:51.515136    3913 kubeadm.go:322] 
	I0925 04:13:51.515165    3913 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0925 04:13:51.515166    3913 kubeadm.go:322] 
	I0925 04:13:51.515206    3913 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0925 04:13:51.515208    3913 kubeadm.go:322] 
	I0925 04:13:51.515217    3913 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0925 04:13:51.515240    3913 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0925 04:13:51.515268    3913 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0925 04:13:51.515269    3913 kubeadm.go:322] 
	I0925 04:13:51.515291    3913 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0925 04:13:51.515292    3913 kubeadm.go:322] 
	I0925 04:13:51.515326    3913 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0925 04:13:51.515327    3913 kubeadm.go:322] 
	I0925 04:13:51.515354    3913 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0925 04:13:51.515389    3913 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0925 04:13:51.515433    3913 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0925 04:13:51.515435    3913 kubeadm.go:322] 
	I0925 04:13:51.515477    3913 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0925 04:13:51.515509    3913 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0925 04:13:51.515511    3913 kubeadm.go:322] 
	I0925 04:13:51.515549    3913 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 7pgopr.krvc6fzqy6kd4g5k \
	I0925 04:13:51.515594    3913 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:3fc5fb926713648f8638ba10da0d4f45584d32929bcc07af5ada491c000ad47e \
	I0925 04:13:51.515605    3913 kubeadm.go:322] 	--control-plane 
	I0925 04:13:51.515607    3913 kubeadm.go:322] 
	I0925 04:13:51.515662    3913 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0925 04:13:51.515667    3913 kubeadm.go:322] 
	I0925 04:13:51.515708    3913 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 7pgopr.krvc6fzqy6kd4g5k \
	I0925 04:13:51.515772    3913 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:3fc5fb926713648f8638ba10da0d4f45584d32929bcc07af5ada491c000ad47e 
	I0925 04:13:51.515831    3913 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
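
	Note: the bootstrap token embedded in the join commands above is short-lived (kubeadm tokens default to a 24h TTL). If it has expired, a fresh join command can be printed from the control-plane node; a sketch, run inside the VM (e.g. via minikube ssh) with the same binaries path this log uses:

	    sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm token create --print-join-command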
	I0925 04:13:51.515835    3913 cni.go:84] Creating CNI manager for ""
	I0925 04:13:51.515842    3913 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 04:13:51.524267    3913 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0925 04:13:51.528131    3913 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0925 04:13:51.531103    3913 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
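
	Note: the 457-byte payload written to /etc/cni/net.d/1-k8s.conflist is not reproduced in the log. As a rough sketch of the shape of a CNI bridge conflist (illustrative values only, not minikube's exact file), written via a heredoc:

	    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isGateway": true,
	          "ipMasq": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF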
	I0925 04:13:51.535868    3913 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0925 04:13:51.535926    3913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=1bf6c3d5317028f348e55ea19d261973a6487d3c minikube.k8s.io/name=image-543000 minikube.k8s.io/updated_at=2023_09_25T04_13_51_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 04:13:51.535928    3913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 04:13:51.595719    3913 ops.go:34] apiserver oom_adj: -16
	I0925 04:13:51.596687    3913 kubeadm.go:1081] duration metric: took 60.790916ms to wait for elevateKubeSystemPrivileges.
	I0925 04:13:51.596694    3913 kubeadm.go:406] StartCluster complete in 7.428762083s
	I0925 04:13:51.596703    3913 settings.go:142] acquiring lock: {Name:mkb5a0822179f07ef9369c44aa9b64eb9ef74eed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 04:13:51.596781    3913 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17297-1010/kubeconfig
	I0925 04:13:51.597111    3913 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/kubeconfig: {Name:mkaa9d09ca2bf27c1a43efc9acf938adcc68343d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 04:13:51.597297    3913 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0925 04:13:51.597346    3913 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0925 04:13:51.597378    3913 addons.go:69] Setting storage-provisioner=true in profile "image-543000"
	I0925 04:13:51.597384    3913 addons.go:231] Setting addon storage-provisioner=true in "image-543000"
	I0925 04:13:51.597403    3913 host.go:66] Checking if "image-543000" exists ...
	I0925 04:13:51.597410    3913 addons.go:69] Setting default-storageclass=true in profile "image-543000"
	I0925 04:13:51.597416    3913 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "image-543000"
	I0925 04:13:51.597477    3913 config.go:182] Loaded profile config "image-543000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 04:13:51.602621    3913 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0925 04:13:51.608500    3913 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0925 04:13:51.608505    3913 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0925 04:13:51.608514    3913 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/image-543000/id_rsa Username:docker}
	I0925 04:13:51.612512    3913 addons.go:231] Setting addon default-storageclass=true in "image-543000"
	I0925 04:13:51.612526    3913 host.go:66] Checking if "image-543000" exists ...
	I0925 04:13:51.613176    3913 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0925 04:13:51.613180    3913 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0925 04:13:51.613185    3913 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/image-543000/id_rsa Username:docker}
	I0925 04:13:51.619060    3913 kapi.go:248] "coredns" deployment in "kube-system" namespace and "image-543000" context rescaled to 1 replicas
	I0925 04:13:51.619073    3913 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 04:13:51.626485    3913 out.go:177] * Verifying Kubernetes components...
	I0925 04:13:51.630570    3913 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0925 04:13:51.647854    3913 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0925 04:13:51.648221    3913 api_server.go:52] waiting for apiserver process to appear ...
	I0925 04:13:51.648257    3913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 04:13:51.663717    3913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0925 04:13:51.698903    3913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0925 04:13:52.070834    3913 start.go:923] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
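
	Note: the sed pipeline at 04:13:51.647854 rewrites the coredns ConfigMap in place; reconstructed from the sed expressions (the resulting Corefile itself is not dumped in the log), the injected block looks like:

	    hosts {
	       192.168.105.1 host.minikube.internal
	       fallthrough
	    }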
	I0925 04:13:52.070856    3913 api_server.go:72] duration metric: took 451.773083ms to wait for apiserver process to appear ...
	I0925 04:13:52.070861    3913 api_server.go:88] waiting for apiserver healthz status ...
	I0925 04:13:52.070874    3913 api_server.go:253] Checking apiserver healthz at https://192.168.105.5:8443/healthz ...
	I0925 04:13:52.074437    3913 api_server.go:279] https://192.168.105.5:8443/healthz returned 200:
	ok
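
	Note: the same healthz probe can be reproduced from the host. A sketch using curl, with certificate verification disabled because the apiserver cert is signed by minikubeCA rather than a system CA:

	    curl -k https://192.168.105.5:8443/healthz    # prints "ok" on a healthy apiserver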
	I0925 04:13:52.075438    3913 api_server.go:141] control plane version: v1.28.2
	I0925 04:13:52.075444    3913 api_server.go:131] duration metric: took 4.58025ms to wait for apiserver health ...
	I0925 04:13:52.075448    3913 system_pods.go:43] waiting for kube-system pods to appear ...
	I0925 04:13:52.078125    3913 system_pods.go:59] 4 kube-system pods found
	I0925 04:13:52.078131    3913 system_pods.go:61] "etcd-image-543000" [4bc87a4d-c60d-432a-8516-e1f458b26f83] Pending
	I0925 04:13:52.078133    3913 system_pods.go:61] "kube-apiserver-image-543000" [1f7128ab-3b99-4394-a129-f7c410a8a245] Pending
	I0925 04:13:52.078135    3913 system_pods.go:61] "kube-controller-manager-image-543000" [f78b12a7-282c-4c6d-9f7c-61214a717ec7] Pending
	I0925 04:13:52.078136    3913 system_pods.go:61] "kube-scheduler-image-543000" [ccc2552e-0671-4d70-9f9c-f29ed30386cd] Pending
	I0925 04:13:52.078138    3913 system_pods.go:74] duration metric: took 2.68875ms to wait for pod list to return data ...
	I0925 04:13:52.078141    3913 kubeadm.go:581] duration metric: took 459.060083ms to wait for : map[apiserver:true system_pods:true] ...
	I0925 04:13:52.078146    3913 node_conditions.go:102] verifying NodePressure condition ...
	I0925 04:13:52.079464    3913 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0925 04:13:52.079470    3913 node_conditions.go:123] node cpu capacity is 2
	I0925 04:13:52.079475    3913 node_conditions.go:105] duration metric: took 1.3275ms to run NodePressure ...
	I0925 04:13:52.079479    3913 start.go:228] waiting for startup goroutines ...
	I0925 04:13:52.163939    3913 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0925 04:13:52.167995    3913 addons.go:502] enable addons completed in 570.64925ms: enabled=[storage-provisioner default-storageclass]
	I0925 04:13:52.168006    3913 start.go:233] waiting for cluster config update ...
	I0925 04:13:52.168010    3913 start.go:242] writing updated cluster config ...
	I0925 04:13:52.168239    3913 ssh_runner.go:195] Run: rm -f paused
	I0925 04:13:52.195401    3913 start.go:600] kubectl: 1.27.2, cluster: 1.28.2 (minor skew: 1)
	I0925 04:13:52.199985    3913 out.go:177] * Done! kubectl is now configured to use "image-543000" cluster and "default" namespace by default
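
	Note: at this point the kubeconfig at /Users/jenkins/minikube-integration/17297-1010/kubeconfig has image-543000 as its current context. A quick sanity check, assuming kubectl is on the host PATH:

	    kubectl config current-context    # image-543000
	    kubectl get nodes                 # reports NotReady until the bridge CNI config is picked up (see node conditions below)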
	
	* 
	* ==> Docker <==
	* -- Journal begins at Mon 2023-09-25 11:13:31 UTC, ends at Mon 2023-09-25 11:13:53 UTC. --
	Sep 25 11:13:47 image-543000 dockerd[1123]: time="2023-09-25T11:13:47.221401299Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 25 11:13:47 image-543000 dockerd[1123]: time="2023-09-25T11:13:47.221412841Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 11:13:47 image-543000 dockerd[1123]: time="2023-09-25T11:13:47.252439924Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 25 11:13:47 image-543000 dockerd[1123]: time="2023-09-25T11:13:47.252469799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 11:13:47 image-543000 dockerd[1123]: time="2023-09-25T11:13:47.252534716Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 25 11:13:47 image-543000 dockerd[1123]: time="2023-09-25T11:13:47.252558132Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 11:13:47 image-543000 dockerd[1123]: time="2023-09-25T11:13:47.252569007Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 25 11:13:47 image-543000 dockerd[1123]: time="2023-09-25T11:13:47.252577632Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 11:13:47 image-543000 dockerd[1123]: time="2023-09-25T11:13:47.252599549Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 25 11:13:47 image-543000 dockerd[1123]: time="2023-09-25T11:13:47.252608466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 11:13:47 image-543000 dockerd[1123]: time="2023-09-25T11:13:47.256427841Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 25 11:13:47 image-543000 dockerd[1123]: time="2023-09-25T11:13:47.256475549Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 11:13:47 image-543000 dockerd[1123]: time="2023-09-25T11:13:47.256492716Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 25 11:13:47 image-543000 dockerd[1123]: time="2023-09-25T11:13:47.256503382Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 11:13:53 image-543000 dockerd[1117]: time="2023-09-25T11:13:53.214493927Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	Sep 25 11:13:53 image-543000 dockerd[1117]: time="2023-09-25T11:13:53.342140385Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	Sep 25 11:13:53 image-543000 dockerd[1117]: time="2023-09-25T11:13:53.360512552Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	Sep 25 11:13:53 image-543000 dockerd[1123]: time="2023-09-25T11:13:53.390037760Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 25 11:13:53 image-543000 dockerd[1123]: time="2023-09-25T11:13:53.390066219Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 11:13:53 image-543000 dockerd[1123]: time="2023-09-25T11:13:53.390072219Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 25 11:13:53 image-543000 dockerd[1123]: time="2023-09-25T11:13:53.390076344Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 11:13:53 image-543000 dockerd[1123]: time="2023-09-25T11:13:53.539660635Z" level=info msg="shim disconnected" id=c84876e9f9d16a4ec48fa83cad0ac8908b7642cc4f8e925e503f7edd0dece570 namespace=moby
	Sep 25 11:13:53 image-543000 dockerd[1117]: time="2023-09-25T11:13:53.539812469Z" level=info msg="ignoring event" container=c84876e9f9d16a4ec48fa83cad0ac8908b7642cc4f8e925e503f7edd0dece570 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 25 11:13:53 image-543000 dockerd[1123]: time="2023-09-25T11:13:53.540040094Z" level=warning msg="cleaning up after shim disconnected" id=c84876e9f9d16a4ec48fa83cad0ac8908b7642cc4f8e925e503f7edd0dece570 namespace=moby
	Sep 25 11:13:53 image-543000 dockerd[1123]: time="2023-09-25T11:13:53.540060802Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9eb5417edee59       64fc40cee3716       6 seconds ago       Running             kube-scheduler            0                   c2161fe4a4197       kube-scheduler-image-543000
	09a6d89063c8c       9cdd6470f48c8       6 seconds ago       Running             etcd                      0                   9656df7a90551       etcd-image-543000
	82c61181bc12a       89d57b83c1786       6 seconds ago       Running             kube-controller-manager   0                   8985c782ba0c2       kube-controller-manager-image-543000
	a1e56c508787a       30bb499447fe1       6 seconds ago       Running             kube-apiserver            0                   eb7a04ea5cdc4       kube-apiserver-image-543000
	
	* 
	* ==> describe nodes <==
	* Name:               image-543000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=image-543000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1bf6c3d5317028f348e55ea19d261973a6487d3c
	                    minikube.k8s.io/name=image-543000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_25T04_13_51_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 25 Sep 2023 11:13:49 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  image-543000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 25 Sep 2023 11:13:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 25 Sep 2023 11:13:51 +0000   Mon, 25 Sep 2023 11:13:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 25 Sep 2023 11:13:51 +0000   Mon, 25 Sep 2023 11:13:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 25 Sep 2023 11:13:51 +0000   Mon, 25 Sep 2023 11:13:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Mon, 25 Sep 2023 11:13:51 +0000   Mon, 25 Sep 2023 11:13:48 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
	Addresses:
	  InternalIP:  192.168.105.5
	  Hostname:    image-543000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 362d6e52e3dd4ed3983e73fd46c11651
	  System UUID:                362d6e52e3dd4ed3983e73fd46c11651
	  Boot ID:                    036b5f88-8efe-4804-98a1-6e3811ab0527
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-image-543000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2s
	  kube-system                 kube-apiserver-image-543000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2s
	  kube-system                 kube-controller-manager-image-543000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2s
	  kube-system                 kube-scheduler-image-543000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             100Mi (2%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age              From     Message
	  ----    ------                   ----             ----     -------
	  Normal  Starting                 7s               kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  7s (x8 over 7s)  kubelet  Node image-543000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7s (x8 over 7s)  kubelet  Node image-543000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7s (x7 over 7s)  kubelet  Node image-543000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7s               kubelet  Updated Node Allocatable limit across pods
	  Normal  Starting                 2s               kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  2s               kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2s               kubelet  Node image-543000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2s               kubelet  Node image-543000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2s               kubelet  Node image-543000 status is now: NodeHasSufficientPID
	
	* 
	* ==> dmesg <==
	* [Sep25 11:13] ACPI: SRAT not present
	[  +0.000000] KASLR disabled due to lack of seed
	[  +0.646000] EINJ: EINJ table not found.
	[  +0.551187] systemd-fstab-generator[117]: Ignoring "noauto" for root device
	[  +0.044855] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000797] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +6.174044] systemd-fstab-generator[493]: Ignoring "noauto" for root device
	[  +0.089853] systemd-fstab-generator[504]: Ignoring "noauto" for root device
	[  +0.464222] systemd-fstab-generator[682]: Ignoring "noauto" for root device
	[  +0.183762] systemd-fstab-generator[748]: Ignoring "noauto" for root device
	[  +0.077573] systemd-fstab-generator[759]: Ignoring "noauto" for root device
	[  +0.078698] systemd-fstab-generator[772]: Ignoring "noauto" for root device
	[  +1.237586] systemd-fstab-generator[930]: Ignoring "noauto" for root device
	[  +0.085583] systemd-fstab-generator[941]: Ignoring "noauto" for root device
	[  +0.078067] systemd-fstab-generator[952]: Ignoring "noauto" for root device
	[  +0.074019] systemd-fstab-generator[963]: Ignoring "noauto" for root device
	[  +0.086304] systemd-fstab-generator[1005]: Ignoring "noauto" for root device
	[  +2.548523] systemd-fstab-generator[1110]: Ignoring "noauto" for root device
	[  +1.460704] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.382390] systemd-fstab-generator[1493]: Ignoring "noauto" for root device
	[  +5.132379] systemd-fstab-generator[2347]: Ignoring "noauto" for root device
	[  +2.159795] kauditd_printk_skb: 41 callbacks suppressed
	
	* 
	* ==> etcd [09a6d89063c8] <==
	* {"level":"info","ts":"2023-09-25T11:13:47.468165Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-25T11:13:47.468184Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-25T11:13:47.468187Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-25T11:13:47.46827Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.105.5:2380"}
	{"level":"info","ts":"2023-09-25T11:13:47.468276Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.5:2380"}
	{"level":"info","ts":"2023-09-25T11:13:47.47141Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 switched to configuration voters=(6403572207504089856)"}
	{"level":"info","ts":"2023-09-25T11:13:47.47145Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"cd5c0afff2184bea","local-member-id":"58de0efec1d86300","added-peer-id":"58de0efec1d86300","added-peer-peer-urls":["https://192.168.105.5:2380"]}
	{"level":"info","ts":"2023-09-25T11:13:48.325299Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 is starting a new election at term 1"}
	{"level":"info","ts":"2023-09-25T11:13:48.325396Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-09-25T11:13:48.325424Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 received MsgPreVoteResp from 58de0efec1d86300 at term 1"}
	{"level":"info","ts":"2023-09-25T11:13:48.325445Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became candidate at term 2"}
	{"level":"info","ts":"2023-09-25T11:13:48.325472Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 received MsgVoteResp from 58de0efec1d86300 at term 2"}
	{"level":"info","ts":"2023-09-25T11:13:48.325492Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became leader at term 2"}
	{"level":"info","ts":"2023-09-25T11:13:48.325523Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 58de0efec1d86300 elected leader 58de0efec1d86300 at term 2"}
	{"level":"info","ts":"2023-09-25T11:13:48.326214Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-25T11:13:48.326549Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"58de0efec1d86300","local-member-attributes":"{Name:image-543000 ClientURLs:[https://192.168.105.5:2379]}","request-path":"/0/members/58de0efec1d86300/attributes","cluster-id":"cd5c0afff2184bea","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-25T11:13:48.326675Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cd5c0afff2184bea","local-member-id":"58de0efec1d86300","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-25T11:13:48.326717Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-25T11:13:48.326738Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-25T11:13:48.326757Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-25T11:13:48.327207Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-25T11:13:48.327257Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-25T11:13:48.3276Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.5:2379"}
	{"level":"info","ts":"2023-09-25T11:13:48.335265Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-25T11:13:48.335314Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  11:13:54 up 0 min,  0 users,  load average: 0.29, 0.07, 0.02
	Linux image-543000 5.10.57 #1 SMP PREEMPT Mon Sep 18 20:10:16 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [a1e56c508787] <==
	* I0925 11:13:48.988073       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0925 11:13:48.988654       1 controller.go:624] quota admission added evaluator for: namespaces
	I0925 11:13:49.005169       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0925 11:13:49.005187       1 aggregator.go:166] initial CRD sync complete...
	I0925 11:13:49.005235       1 autoregister_controller.go:141] Starting autoregister controller
	I0925 11:13:49.005243       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0925 11:13:49.005261       1 cache.go:39] Caches are synced for autoregister controller
	I0925 11:13:49.011828       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0925 11:13:49.011856       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0925 11:13:49.012734       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0925 11:13:49.012758       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0925 11:13:49.021114       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0925 11:13:49.915121       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0925 11:13:49.916524       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0925 11:13:49.916533       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0925 11:13:50.056418       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0925 11:13:50.072872       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0925 11:13:50.120392       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0925 11:13:50.122897       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.105.5]
	I0925 11:13:50.123364       1 controller.go:624] quota admission added evaluator for: endpoints
	I0925 11:13:50.124674       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0925 11:13:50.946493       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0925 11:13:51.330805       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0925 11:13:51.334607       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0925 11:13:51.338473       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	
	* 
	* ==> kube-controller-manager [82c61181bc12] <==
	* I0925 11:13:50.970053       1 node_lifecycle_controller.go:431] "Controller will reconcile labels"
	I0925 11:13:50.970075       1 controllermanager.go:642] "Started controller" controller="node-lifecycle-controller"
	I0925 11:13:50.970108       1 node_lifecycle_controller.go:465] "Sending events to api server"
	I0925 11:13:50.970122       1 node_lifecycle_controller.go:476] "Starting node controller"
	I0925 11:13:50.970129       1 shared_informer.go:311] Waiting for caches to sync for taint
	I0925 11:13:50.972331       1 controllermanager.go:642] "Started controller" controller="replicationcontroller-controller"
	I0925 11:13:50.972393       1 replica_set.go:214] "Starting controller" name="replicationcontroller"
	I0925 11:13:50.972401       1 shared_informer.go:311] Waiting for caches to sync for ReplicationController
	I0925 11:13:50.995281       1 controllermanager.go:642] "Started controller" controller="pod-garbage-collector-controller"
	I0925 11:13:50.995339       1 gc_controller.go:103] "Starting GC controller"
	I0925 11:13:50.995375       1 shared_informer.go:311] Waiting for caches to sync for GC
	I0925 11:13:51.043348       1 shared_informer.go:318] Caches are synced for tokens
	I0925 11:13:51.245780       1 garbagecollector.go:155] "Starting controller" controller="garbagecollector"
	I0925 11:13:51.245792       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0925 11:13:51.245829       1 controllermanager.go:642] "Started controller" controller="garbage-collector-controller"
	I0925 11:13:51.245855       1 graph_builder.go:294] "Running" component="GraphBuilder"
	I0925 11:13:51.495111       1 controllermanager.go:642] "Started controller" controller="deployment-controller"
	I0925 11:13:51.495167       1 deployment_controller.go:168] "Starting controller" controller="deployment"
	I0925 11:13:51.495175       1 shared_informer.go:311] Waiting for caches to sync for deployment
	I0925 11:13:51.795995       1 controllermanager.go:642] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0925 11:13:51.796127       1 horizontal.go:200] "Starting HPA controller"
	I0925 11:13:51.796181       1 shared_informer.go:311] Waiting for caches to sync for HPA
	I0925 11:13:51.951750       1 controllermanager.go:642] "Started controller" controller="statefulset-controller"
	I0925 11:13:51.951924       1 stateful_set.go:161] "Starting stateful set controller"
	I0925 11:13:51.951986       1 shared_informer.go:311] Waiting for caches to sync for stateful set
	
	* 
	* ==> kube-scheduler [9eb5417edee5] <==
	* E0925 11:13:48.971018       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0925 11:13:48.970867       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0925 11:13:48.971031       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0925 11:13:48.971056       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0925 11:13:48.971081       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0925 11:13:48.971082       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0925 11:13:48.971093       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0925 11:13:48.971101       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0925 11:13:48.971108       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0925 11:13:48.971115       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0925 11:13:48.971153       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0925 11:13:48.971160       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0925 11:13:49.930917       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0925 11:13:49.930934       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0925 11:13:49.949781       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0925 11:13:49.949861       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0925 11:13:49.979319       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0925 11:13:49.979339       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0925 11:13:49.993151       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0925 11:13:49.993180       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0925 11:13:49.994699       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0925 11:13:49.994754       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0925 11:13:50.072736       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0925 11:13:50.072791       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0925 11:13:51.868219       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-09-25 11:13:31 UTC, ends at Mon 2023-09-25 11:13:54 UTC. --
	Sep 25 11:13:51 image-543000 kubelet[2353]: I0925 11:13:51.481069    2353 kubelet_node_status.go:108] "Node was previously registered" node="image-543000"
	Sep 25 11:13:51 image-543000 kubelet[2353]: I0925 11:13:51.481124    2353 kubelet_node_status.go:73] "Successfully registered node" node="image-543000"
	Sep 25 11:13:51 image-543000 kubelet[2353]: I0925 11:13:51.497211    2353 topology_manager.go:215] "Topology Admit Handler" podUID="75a5b5953c1d24babaf4a7e76874e73e" podNamespace="kube-system" podName="etcd-image-543000"
	Sep 25 11:13:51 image-543000 kubelet[2353]: I0925 11:13:51.497271    2353 topology_manager.go:215] "Topology Admit Handler" podUID="7021ab48ad6e5ec786a25291621f40a9" podNamespace="kube-system" podName="kube-apiserver-image-543000"
	Sep 25 11:13:51 image-543000 kubelet[2353]: I0925 11:13:51.497287    2353 topology_manager.go:215] "Topology Admit Handler" podUID="e8ef2ff8eee61e08b76b8599f293af4a" podNamespace="kube-system" podName="kube-controller-manager-image-543000"
	Sep 25 11:13:51 image-543000 kubelet[2353]: I0925 11:13:51.497298    2353 topology_manager.go:215] "Topology Admit Handler" podUID="179d9f91510f692bb8ff88b4e60cbf85" podNamespace="kube-system" podName="kube-scheduler-image-543000"
	Sep 25 11:13:51 image-543000 kubelet[2353]: E0925 11:13:51.502489    2353 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-image-543000\" already exists" pod="kube-system/kube-apiserver-image-543000"
	Sep 25 11:13:51 image-543000 kubelet[2353]: I0925 11:13:51.675800    2353 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/75a5b5953c1d24babaf4a7e76874e73e-etcd-certs\") pod \"etcd-image-543000\" (UID: \"75a5b5953c1d24babaf4a7e76874e73e\") " pod="kube-system/etcd-image-543000"
	Sep 25 11:13:51 image-543000 kubelet[2353]: I0925 11:13:51.675819    2353 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/75a5b5953c1d24babaf4a7e76874e73e-etcd-data\") pod \"etcd-image-543000\" (UID: \"75a5b5953c1d24babaf4a7e76874e73e\") " pod="kube-system/etcd-image-543000"
	Sep 25 11:13:51 image-543000 kubelet[2353]: I0925 11:13:51.675829    2353 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7021ab48ad6e5ec786a25291621f40a9-ca-certs\") pod \"kube-apiserver-image-543000\" (UID: \"7021ab48ad6e5ec786a25291621f40a9\") " pod="kube-system/kube-apiserver-image-543000"
	Sep 25 11:13:51 image-543000 kubelet[2353]: I0925 11:13:51.675838    2353 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e8ef2ff8eee61e08b76b8599f293af4a-ca-certs\") pod \"kube-controller-manager-image-543000\" (UID: \"e8ef2ff8eee61e08b76b8599f293af4a\") " pod="kube-system/kube-controller-manager-image-543000"
	Sep 25 11:13:51 image-543000 kubelet[2353]: I0925 11:13:51.675849    2353 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e8ef2ff8eee61e08b76b8599f293af4a-flexvolume-dir\") pod \"kube-controller-manager-image-543000\" (UID: \"e8ef2ff8eee61e08b76b8599f293af4a\") " pod="kube-system/kube-controller-manager-image-543000"
	Sep 25 11:13:51 image-543000 kubelet[2353]: I0925 11:13:51.675858    2353 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e8ef2ff8eee61e08b76b8599f293af4a-kubeconfig\") pod \"kube-controller-manager-image-543000\" (UID: \"e8ef2ff8eee61e08b76b8599f293af4a\") " pod="kube-system/kube-controller-manager-image-543000"
	Sep 25 11:13:51 image-543000 kubelet[2353]: I0925 11:13:51.675868    2353 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e8ef2ff8eee61e08b76b8599f293af4a-usr-share-ca-certificates\") pod \"kube-controller-manager-image-543000\" (UID: \"e8ef2ff8eee61e08b76b8599f293af4a\") " pod="kube-system/kube-controller-manager-image-543000"
	Sep 25 11:13:51 image-543000 kubelet[2353]: I0925 11:13:51.675876    2353 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7021ab48ad6e5ec786a25291621f40a9-k8s-certs\") pod \"kube-apiserver-image-543000\" (UID: \"7021ab48ad6e5ec786a25291621f40a9\") " pod="kube-system/kube-apiserver-image-543000"
	Sep 25 11:13:51 image-543000 kubelet[2353]: I0925 11:13:51.675888    2353 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7021ab48ad6e5ec786a25291621f40a9-usr-share-ca-certificates\") pod \"kube-apiserver-image-543000\" (UID: \"7021ab48ad6e5ec786a25291621f40a9\") " pod="kube-system/kube-apiserver-image-543000"
	Sep 25 11:13:51 image-543000 kubelet[2353]: I0925 11:13:51.675896    2353 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e8ef2ff8eee61e08b76b8599f293af4a-k8s-certs\") pod \"kube-controller-manager-image-543000\" (UID: \"e8ef2ff8eee61e08b76b8599f293af4a\") " pod="kube-system/kube-controller-manager-image-543000"
	Sep 25 11:13:51 image-543000 kubelet[2353]: I0925 11:13:51.675908    2353 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/179d9f91510f692bb8ff88b4e60cbf85-kubeconfig\") pod \"kube-scheduler-image-543000\" (UID: \"179d9f91510f692bb8ff88b4e60cbf85\") " pod="kube-system/kube-scheduler-image-543000"
	Sep 25 11:13:52 image-543000 kubelet[2353]: I0925 11:13:52.371059    2353 apiserver.go:52] "Watching apiserver"
	Sep 25 11:13:52 image-543000 kubelet[2353]: I0925 11:13:52.374865    2353 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Sep 25 11:13:52 image-543000 kubelet[2353]: E0925 11:13:52.436323    2353 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-image-543000\" already exists" pod="kube-system/kube-apiserver-image-543000"
	Sep 25 11:13:52 image-543000 kubelet[2353]: I0925 11:13:52.440356    2353 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-image-543000" podStartSLOduration=1.440324635 podCreationTimestamp="2023-09-25 11:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-25 11:13:52.436550635 +0000 UTC m=+1.116584835" watchObservedRunningTime="2023-09-25 11:13:52.440324635 +0000 UTC m=+1.120358793"
	Sep 25 11:13:52 image-543000 kubelet[2353]: I0925 11:13:52.440854    2353 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/etcd-image-543000" podStartSLOduration=1.440844427 podCreationTimestamp="2023-09-25 11:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-25 11:13:52.440277843 +0000 UTC m=+1.120312043" watchObservedRunningTime="2023-09-25 11:13:52.440844427 +0000 UTC m=+1.120878626"
	Sep 25 11:13:52 image-543000 kubelet[2353]: I0925 11:13:52.443695    2353 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-image-543000" podStartSLOduration=1.443678552 podCreationTimestamp="2023-09-25 11:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-25 11:13:52.44359726 +0000 UTC m=+1.123631418" watchObservedRunningTime="2023-09-25 11:13:52.443678552 +0000 UTC m=+1.123712710"
	Sep 25 11:13:52 image-543000 kubelet[2353]: I0925 11:13:52.451111    2353 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-image-543000" podStartSLOduration=1.451085468 podCreationTimestamp="2023-09-25 11:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-25 11:13:52.44722001 +0000 UTC m=+1.127254210" watchObservedRunningTime="2023-09-25 11:13:52.451085468 +0000 UTC m=+1.131119626"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p image-543000 -n image-543000
helpers_test.go:261: (dbg) Run:  kubectl --context image-543000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: storage-provisioner
helpers_test.go:274: ======> post-mortem[TestImageBuild/serial/BuildWithBuildArg]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context image-543000 describe pod storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context image-543000 describe pod storage-provisioner: exit status 1 (37.618417ms)

** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:279: kubectl --context image-543000 describe pod storage-provisioner: exit status 1
--- FAIL: TestImageBuild/serial/BuildWithBuildArg (1.09s)
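
A minimal by-hand repro sketch of the failing build-arg case, assembled from the Audit table in the logs below; it assumes the image-543000 profile is still running and the repo's testdata layout:

	out/minikube-darwin-arm64 image build -t aaa:latest \
	  --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache \
	  ./testdata/image-build/test-arg -p image-543000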

TestIngressAddonLegacy/serial/ValidateIngressAddons (53.95s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-907000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
E0925 04:15:26.841402    1469 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/client.crt: no such file or directory
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-907000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (16.222352333s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-907000 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-907000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [72eaa571-d81a-4913-8392-07ffdb6314ce] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [72eaa571-d81a-4913-8392-07ffdb6314ce] Running
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 11.016073833s
addons_test.go:238: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-907000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-907000 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-907000 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.105.6
E0925 04:15:40.641298    1469 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/functional-742000/client.crt: no such file or directory
addons_test.go:273: (dbg) Non-zero exit: nslookup hello-john.test 192.168.105.6: exit status 1 (15.035962125s)

-- stdout --
	;; connection timed out; no servers could be reached
	

-- /stdout --
addons_test.go:275: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.105.6" : exit status 1
addons_test.go:279: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
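
A bounded DNS probe against the VM IP reported above can separate "server unreachable" from "record missing"; this is a diagnostic sketch, not part of the test, and assumes BIND's dig is on PATH (+time/+tries cap the wait, unlike nslookup's long retry cycle seen above):

	dig +time=2 +tries=1 @192.168.105.6 hello-john.test A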
addons_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-907000 addons disable ingress-dns --alsologtostderr -v=1
E0925 04:15:54.548261    1469 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/client.crt: no such file or directory
addons_test.go:282: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-907000 addons disable ingress-dns --alsologtostderr -v=1: (3.323515708s)
addons_test.go:287: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-907000 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-907000 addons disable ingress --alsologtostderr -v=1: (7.098492959s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ingress-addon-legacy-907000 -n ingress-addon-legacy-907000
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-907000 logs -n 25
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                   Args                   |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-742000 ssh sudo cat           | functional-742000           | jenkins | v1.31.2 | 25 Sep 23 04:11 PDT | 25 Sep 23 04:11 PDT |
	|                | /etc/test/nested/copy/1469/hosts         |                             |         |         |                     |                     |
	| image          | functional-742000                        | functional-742000           | jenkins | v1.31.2 | 25 Sep 23 04:11 PDT | 25 Sep 23 04:11 PDT |
	|                | image ls --format short                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| image          | functional-742000                        | functional-742000           | jenkins | v1.31.2 | 25 Sep 23 04:11 PDT | 25 Sep 23 04:11 PDT |
	|                | image ls --format yaml                   |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| ssh            | functional-742000 ssh pgrep              | functional-742000           | jenkins | v1.31.2 | 25 Sep 23 04:11 PDT |                     |
	|                | buildkitd                                |                             |         |         |                     |                     |
	| image          | functional-742000 image build -t         | functional-742000           | jenkins | v1.31.2 | 25 Sep 23 04:11 PDT | 25 Sep 23 04:11 PDT |
	|                | localhost/my-image:functional-742000     |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr         |                             |         |         |                     |                     |
	| image          | functional-742000 image ls               | functional-742000           | jenkins | v1.31.2 | 25 Sep 23 04:11 PDT | 25 Sep 23 04:11 PDT |
	| image          | functional-742000                        | functional-742000           | jenkins | v1.31.2 | 25 Sep 23 04:11 PDT | 25 Sep 23 04:11 PDT |
	|                | image ls --format json                   |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| image          | functional-742000                        | functional-742000           | jenkins | v1.31.2 | 25 Sep 23 04:11 PDT | 25 Sep 23 04:11 PDT |
	|                | image ls --format table                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| update-context | functional-742000                        | functional-742000           | jenkins | v1.31.2 | 25 Sep 23 04:11 PDT | 25 Sep 23 04:11 PDT |
	|                | update-context                           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                             |         |         |                     |                     |
	| update-context | functional-742000                        | functional-742000           | jenkins | v1.31.2 | 25 Sep 23 04:11 PDT | 25 Sep 23 04:11 PDT |
	|                | update-context                           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                             |         |         |                     |                     |
	| update-context | functional-742000                        | functional-742000           | jenkins | v1.31.2 | 25 Sep 23 04:11 PDT | 25 Sep 23 04:11 PDT |
	|                | update-context                           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                             |         |         |                     |                     |
	| delete         | -p functional-742000                     | functional-742000           | jenkins | v1.31.2 | 25 Sep 23 04:13 PDT | 25 Sep 23 04:13 PDT |
	| start          | -p image-543000 --driver=qemu2           | image-543000                | jenkins | v1.31.2 | 25 Sep 23 04:13 PDT | 25 Sep 23 04:13 PDT |
	|                |                                          |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-543000                | jenkins | v1.31.2 | 25 Sep 23 04:13 PDT | 25 Sep 23 04:13 PDT |
	|                | ./testdata/image-build/test-normal       |                             |         |         |                     |                     |
	|                | -p image-543000                          |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-543000                | jenkins | v1.31.2 | 25 Sep 23 04:13 PDT | 25 Sep 23 04:13 PDT |
	|                | --build-opt=build-arg=ENV_A=test_env_str |                             |         |         |                     |                     |
	|                | --build-opt=no-cache                     |                             |         |         |                     |                     |
	|                | ./testdata/image-build/test-arg -p       |                             |         |         |                     |                     |
	|                | image-543000                             |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-543000                | jenkins | v1.31.2 | 25 Sep 23 04:13 PDT | 25 Sep 23 04:13 PDT |
	|                | ./testdata/image-build/test-normal       |                             |         |         |                     |                     |
	|                | --build-opt=no-cache -p                  |                             |         |         |                     |                     |
	|                | image-543000                             |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-543000                | jenkins | v1.31.2 | 25 Sep 23 04:13 PDT | 25 Sep 23 04:13 PDT |
	|                | -f inner/Dockerfile                      |                             |         |         |                     |                     |
	|                | ./testdata/image-build/test-f            |                             |         |         |                     |                     |
	|                | -p image-543000                          |                             |         |         |                     |                     |
	| delete         | -p image-543000                          | image-543000                | jenkins | v1.31.2 | 25 Sep 23 04:13 PDT | 25 Sep 23 04:13 PDT |
	| start          | -p ingress-addon-legacy-907000           | ingress-addon-legacy-907000 | jenkins | v1.31.2 | 25 Sep 23 04:13 PDT | 25 Sep 23 04:14 PDT |
	|                | --kubernetes-version=v1.18.20            |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                   |                             |         |         |                     |                     |
	|                | --driver=qemu2                           |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-907000              | ingress-addon-legacy-907000 | jenkins | v1.31.2 | 25 Sep 23 04:14 PDT | 25 Sep 23 04:15 PDT |
	|                | addons enable ingress                    |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                   |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-907000              | ingress-addon-legacy-907000 | jenkins | v1.31.2 | 25 Sep 23 04:15 PDT | 25 Sep 23 04:15 PDT |
	|                | addons enable ingress-dns                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                   |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-907000              | ingress-addon-legacy-907000 | jenkins | v1.31.2 | 25 Sep 23 04:15 PDT | 25 Sep 23 04:15 PDT |
	|                | ssh curl -s http://127.0.0.1/            |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'             |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-907000 ip           | ingress-addon-legacy-907000 | jenkins | v1.31.2 | 25 Sep 23 04:15 PDT | 25 Sep 23 04:15 PDT |
	| addons         | ingress-addon-legacy-907000              | ingress-addon-legacy-907000 | jenkins | v1.31.2 | 25 Sep 23 04:15 PDT | 25 Sep 23 04:15 PDT |
	|                | addons disable ingress-dns               |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                   |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-907000              | ingress-addon-legacy-907000 | jenkins | v1.31.2 | 25 Sep 23 04:15 PDT | 25 Sep 23 04:16 PDT |
	|                | addons disable ingress                   |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                   |                             |         |         |                     |                     |
	|----------------|------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/25 04:13:54
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.21.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0925 04:13:54.672113    3959 out.go:296] Setting OutFile to fd 1 ...
	I0925 04:13:54.672233    3959 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:13:54.672236    3959 out.go:309] Setting ErrFile to fd 2...
	I0925 04:13:54.672238    3959 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:13:54.672365    3959 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17297-1010/.minikube/bin
	I0925 04:13:54.673432    3959 out.go:303] Setting JSON to false
	I0925 04:13:54.689041    3959 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2609,"bootTime":1695637825,"procs":423,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.6","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 04:13:54.689125    3959 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0925 04:13:54.692738    3959 out.go:177] * [ingress-addon-legacy-907000] minikube v1.31.2 on Darwin 13.6 (arm64)
	I0925 04:13:54.699740    3959 out.go:177]   - MINIKUBE_LOCATION=17297
	I0925 04:13:54.699845    3959 notify.go:220] Checking for updates...
	I0925 04:13:54.702723    3959 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	I0925 04:13:54.705729    3959 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 04:13:54.708698    3959 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 04:13:54.711731    3959 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	I0925 04:13:54.714687    3959 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 04:13:54.717904    3959 driver.go:373] Setting default libvirt URI to qemu:///system
	I0925 04:13:54.721715    3959 out.go:177] * Using the qemu2 driver based on user configuration
	I0925 04:13:54.728656    3959 start.go:298] selected driver: qemu2
	I0925 04:13:54.728668    3959 start.go:902] validating driver "qemu2" against <nil>
	I0925 04:13:54.728674    3959 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 04:13:54.730718    3959 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0925 04:13:54.733765    3959 out.go:177] * Automatically selected the socket_vmnet network
	I0925 04:13:54.736783    3959 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0925 04:13:54.736806    3959 cni.go:84] Creating CNI manager for ""
	I0925 04:13:54.736813    3959 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0925 04:13:54.736822    3959 start_flags.go:321] config:
	{Name:ingress-addon-legacy-907000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-907000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 04:13:54.741250    3959 iso.go:125] acquiring lock: {Name:mkf881a60cf9fd1672567914305ff6f7a4f13809 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 04:13:54.748715    3959 out.go:177] * Starting control plane node ingress-addon-legacy-907000 in cluster ingress-addon-legacy-907000
	I0925 04:13:54.752683    3959 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0925 04:13:54.807097    3959 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I0925 04:13:54.807120    3959 cache.go:57] Caching tarball of preloaded images
	I0925 04:13:54.807292    3959 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0925 04:13:54.811760    3959 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0925 04:13:54.819657    3959 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0925 04:13:54.901909    3959 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4?checksum=md5:c8c260b886393123ce9d312d8ac2379e -> /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I0925 04:14:00.499557    3959 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0925 04:14:00.499696    3959 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0925 04:14:01.248425    3959 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on docker
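	# A by-hand equivalent of the preload download + md5 check above, using the
	# URL and checksum carried in the log lines (macOS md5; md5sum on Linux):
	#   curl -fLO https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	#   md5 preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4   # expect c8c260b886393123ce9d312d8ac2379e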
	I0925 04:14:01.248617    3959 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/ingress-addon-legacy-907000/config.json ...
	I0925 04:14:01.248639    3959 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/ingress-addon-legacy-907000/config.json: {Name:mk5d112d254a4c65593428b73eb3d02fdf7f7d03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 04:14:01.248869    3959 start.go:365] acquiring machines lock for ingress-addon-legacy-907000: {Name:mk918d99818e2cf21e5912bc291ed18d4b442ba6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 04:14:01.248901    3959 start.go:369] acquired machines lock for "ingress-addon-legacy-907000" in 20.958µs
	I0925 04:14:01.248909    3959 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-907000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-907000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 04:14:01.248939    3959 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 04:14:01.256772    3959 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0925 04:14:01.271290    3959 start.go:159] libmachine.API.Create for "ingress-addon-legacy-907000" (driver="qemu2")
	I0925 04:14:01.271309    3959 client.go:168] LocalClient.Create starting
	I0925 04:14:01.271387    3959 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem
	I0925 04:14:01.271410    3959 main.go:141] libmachine: Decoding PEM data...
	I0925 04:14:01.271425    3959 main.go:141] libmachine: Parsing certificate...
	I0925 04:14:01.271467    3959 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/cert.pem
	I0925 04:14:01.271484    3959 main.go:141] libmachine: Decoding PEM data...
	I0925 04:14:01.271496    3959 main.go:141] libmachine: Parsing certificate...
	I0925 04:14:01.271835    3959 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17297-1010/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0925 04:14:01.459782    3959 main.go:141] libmachine: Creating SSH key...
	I0925 04:14:01.560443    3959 main.go:141] libmachine: Creating Disk image...
	I0925 04:14:01.560448    3959 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 04:14:01.560589    3959 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/ingress-addon-legacy-907000/disk.qcow2.raw /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/ingress-addon-legacy-907000/disk.qcow2
	I0925 04:14:01.569047    3959 main.go:141] libmachine: STDOUT: 
	I0925 04:14:01.569071    3959 main.go:141] libmachine: STDERR: 
	I0925 04:14:01.569135    3959 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/ingress-addon-legacy-907000/disk.qcow2 +20000M
	I0925 04:14:01.576390    3959 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 04:14:01.576404    3959 main.go:141] libmachine: STDERR: 
	I0925 04:14:01.576428    3959 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/ingress-addon-legacy-907000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/ingress-addon-legacy-907000/disk.qcow2
	I0925 04:14:01.576436    3959 main.go:141] libmachine: Starting QEMU VM...
	I0925 04:14:01.576477    3959 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4096 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/ingress-addon-legacy-907000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/ingress-addon-legacy-907000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/ingress-addon-legacy-907000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:43:28:83:73:c6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/ingress-addon-legacy-907000/disk.qcow2
	I0925 04:14:01.610754    3959 main.go:141] libmachine: STDOUT: 
	I0925 04:14:01.610784    3959 main.go:141] libmachine: STDERR: 
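	# The qemu-system-aarch64 invocation above, reduced to its load-bearing
	# flags for readability ($MK stands in for the machine directory path):
	#   -M virt -cpu host                       virt machine type, host CPU passthrough
	#   -accel hvf -m 4096 -smp 2               Hypervisor.framework accel, 4 GiB RAM, 2 vCPUs
	#   -boot d -cdrom $MK/boot2docker.iso      first boot from the boot2docker ISO
	#   -device virtio-net-pci,netdev=net0,mac=92:43:28:83:73:c6
	#   -netdev socket,id=net0,fd=3             NIC bridged via the socket_vmnet client fd
	#   -daemonize $MK/disk.qcow2               background the VM; qcow2 as primary disk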
	I0925 04:14:01.610788    3959 main.go:141] libmachine: Attempt 0
	I0925 04:14:01.610807    3959 main.go:141] libmachine: Searching for 92:43:28:83:73:c6 in /var/db/dhcpd_leases ...
	I0925 04:14:01.610878    3959 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0925 04:14:01.610898    3959 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:c2:cd:eb:d5:3a:6a ID:1,c2:cd:eb:d5:3a:6a Lease:0x6512bcdb}
	I0925 04:14:01.610912    3959 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:7a:1a:96:ca:e:57 ID:1,7a:1a:96:ca:e:57 Lease:0x6512bb69}
	I0925 04:14:01.610920    3959 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:46:8b:64:9b:3d:9c ID:1,46:8b:64:9b:3d:9c Lease:0x651169dd}
	I0925 04:14:01.610925    3959 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:4e:70:b3:50:3d:bc ID:1,4e:70:b3:50:3d:bc Lease:0x651169b8}
	I0925 04:14:03.613145    3959 main.go:141] libmachine: Attempt 1
	I0925 04:14:03.613221    3959 main.go:141] libmachine: Searching for 92:43:28:83:73:c6 in /var/db/dhcpd_leases ...
	I0925 04:14:03.613509    3959 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0925 04:14:03.613561    3959 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:c2:cd:eb:d5:3a:6a ID:1,c2:cd:eb:d5:3a:6a Lease:0x6512bcdb}
	I0925 04:14:03.613638    3959 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:7a:1a:96:ca:e:57 ID:1,7a:1a:96:ca:e:57 Lease:0x6512bb69}
	I0925 04:14:03.613670    3959 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:46:8b:64:9b:3d:9c ID:1,46:8b:64:9b:3d:9c Lease:0x651169dd}
	I0925 04:14:03.613698    3959 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:4e:70:b3:50:3d:bc ID:1,4e:70:b3:50:3d:bc Lease:0x651169b8}
	I0925 04:14:05.614306    3959 main.go:141] libmachine: Attempt 2
	I0925 04:14:05.614365    3959 main.go:141] libmachine: Searching for 92:43:28:83:73:c6 in /var/db/dhcpd_leases ...
	I0925 04:14:05.614484    3959 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0925 04:14:05.614497    3959 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:c2:cd:eb:d5:3a:6a ID:1,c2:cd:eb:d5:3a:6a Lease:0x6512bcdb}
	I0925 04:14:05.614506    3959 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:7a:1a:96:ca:e:57 ID:1,7a:1a:96:ca:e:57 Lease:0x6512bb69}
	I0925 04:14:05.614512    3959 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:46:8b:64:9b:3d:9c ID:1,46:8b:64:9b:3d:9c Lease:0x651169dd}
	I0925 04:14:05.614517    3959 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:4e:70:b3:50:3d:bc ID:1,4e:70:b3:50:3d:bc Lease:0x651169b8}
	I0925 04:14:07.616588    3959 main.go:141] libmachine: Attempt 3
	I0925 04:14:07.616612    3959 main.go:141] libmachine: Searching for 92:43:28:83:73:c6 in /var/db/dhcpd_leases ...
	I0925 04:14:07.616657    3959 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0925 04:14:07.616666    3959 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:c2:cd:eb:d5:3a:6a ID:1,c2:cd:eb:d5:3a:6a Lease:0x6512bcdb}
	I0925 04:14:07.616672    3959 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:7a:1a:96:ca:e:57 ID:1,7a:1a:96:ca:e:57 Lease:0x6512bb69}
	I0925 04:14:07.616677    3959 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:46:8b:64:9b:3d:9c ID:1,46:8b:64:9b:3d:9c Lease:0x651169dd}
	I0925 04:14:07.616683    3959 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:4e:70:b3:50:3d:bc ID:1,4e:70:b3:50:3d:bc Lease:0x651169b8}
	I0925 04:14:09.618776    3959 main.go:141] libmachine: Attempt 4
	I0925 04:14:09.618801    3959 main.go:141] libmachine: Searching for 92:43:28:83:73:c6 in /var/db/dhcpd_leases ...
	I0925 04:14:09.618894    3959 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0925 04:14:09.618907    3959 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:c2:cd:eb:d5:3a:6a ID:1,c2:cd:eb:d5:3a:6a Lease:0x6512bcdb}
	I0925 04:14:09.618912    3959 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:7a:1a:96:ca:e:57 ID:1,7a:1a:96:ca:e:57 Lease:0x6512bb69}
	I0925 04:14:09.618918    3959 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:46:8b:64:9b:3d:9c ID:1,46:8b:64:9b:3d:9c Lease:0x651169dd}
	I0925 04:14:09.618923    3959 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:4e:70:b3:50:3d:bc ID:1,4e:70:b3:50:3d:bc Lease:0x651169b8}
	I0925 04:14:11.621041    3959 main.go:141] libmachine: Attempt 5
	I0925 04:14:11.621062    3959 main.go:141] libmachine: Searching for 92:43:28:83:73:c6 in /var/db/dhcpd_leases ...
	I0925 04:14:11.621139    3959 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0925 04:14:11.621159    3959 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:c2:cd:eb:d5:3a:6a ID:1,c2:cd:eb:d5:3a:6a Lease:0x6512bcdb}
	I0925 04:14:11.621164    3959 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:7a:1a:96:ca:e:57 ID:1,7a:1a:96:ca:e:57 Lease:0x6512bb69}
	I0925 04:14:11.621170    3959 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:46:8b:64:9b:3d:9c ID:1,46:8b:64:9b:3d:9c Lease:0x651169dd}
	I0925 04:14:11.621175    3959 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:4e:70:b3:50:3d:bc ID:1,4e:70:b3:50:3d:bc Lease:0x651169b8}
	I0925 04:14:13.623285    3959 main.go:141] libmachine: Attempt 6
	I0925 04:14:13.623331    3959 main.go:141] libmachine: Searching for 92:43:28:83:73:c6 in /var/db/dhcpd_leases ...
	I0925 04:14:13.623477    3959 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0925 04:14:13.623490    3959 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:92:43:28:83:73:c6 ID:1,92:43:28:83:73:c6 Lease:0x6512bd04}
	I0925 04:14:13.623494    3959 main.go:141] libmachine: Found match: 92:43:28:83:73:c6
	I0925 04:14:13.623510    3959 main.go:141] libmachine: IP: 192.168.105.6
	I0925 04:14:13.623520    3959 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.6)...
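	# The retry loop above polls the macOS bootpd lease file until the VM's MAC
	# shows up; the same check by hand, assuming the stock lease file location
	# used in the log:
	#   grep -B1 -A3 '92:43:28:83:73:c6' /var/db/dhcpd_leases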
	I0925 04:14:15.631207    3959 machine.go:88] provisioning docker machine ...
	I0925 04:14:15.631233    3959 buildroot.go:166] provisioning hostname "ingress-addon-legacy-907000"
	I0925 04:14:15.631281    3959 main.go:141] libmachine: Using SSH client type: native
	I0925 04:14:15.631564    3959 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1005f0760] 0x1005f2ed0 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0925 04:14:15.631570    3959 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-907000 && echo "ingress-addon-legacy-907000" | sudo tee /etc/hostname
	I0925 04:14:15.685214    3959 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-907000
	
	I0925 04:14:15.685261    3959 main.go:141] libmachine: Using SSH client type: native
	I0925 04:14:15.685493    3959 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1005f0760] 0x1005f2ed0 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0925 04:14:15.685503    3959 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-907000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-907000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-907000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0925 04:14:15.739388    3959 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0925 04:14:15.739401    3959 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17297-1010/.minikube CaCertPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17297-1010/.minikube}
	I0925 04:14:15.739409    3959 buildroot.go:174] setting up certificates
	I0925 04:14:15.739417    3959 provision.go:83] configureAuth start
	I0925 04:14:15.739424    3959 provision.go:138] copyHostCerts
	I0925 04:14:15.739449    3959 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/17297-1010/.minikube/ca.pem
	I0925 04:14:15.739485    3959 exec_runner.go:144] found /Users/jenkins/minikube-integration/17297-1010/.minikube/ca.pem, removing ...
	I0925 04:14:15.739491    3959 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17297-1010/.minikube/ca.pem
	I0925 04:14:15.739600    3959 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17297-1010/.minikube/ca.pem (1082 bytes)
	I0925 04:14:15.739745    3959 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/17297-1010/.minikube/cert.pem
	I0925 04:14:15.739769    3959 exec_runner.go:144] found /Users/jenkins/minikube-integration/17297-1010/.minikube/cert.pem, removing ...
	I0925 04:14:15.739771    3959 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17297-1010/.minikube/cert.pem
	I0925 04:14:15.739815    3959 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17297-1010/.minikube/cert.pem (1123 bytes)
	I0925 04:14:15.739890    3959 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/17297-1010/.minikube/key.pem
	I0925 04:14:15.739912    3959 exec_runner.go:144] found /Users/jenkins/minikube-integration/17297-1010/.minikube/key.pem, removing ...
	I0925 04:14:15.739915    3959 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17297-1010/.minikube/key.pem
	I0925 04:14:15.739954    3959 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17297-1010/.minikube/key.pem (1679 bytes)
	I0925 04:14:15.740025    3959 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-907000 san=[192.168.105.6 192.168.105.6 localhost 127.0.0.1 minikube ingress-addon-legacy-907000]
	I0925 04:14:15.874866    3959 provision.go:172] copyRemoteCerts
	I0925 04:14:15.874902    3959 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0925 04:14:15.874910    3959 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/ingress-addon-legacy-907000/id_rsa Username:docker}
	I0925 04:14:15.902582    3959 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0925 04:14:15.902639    3959 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0925 04:14:15.909474    3959 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0925 04:14:15.909517    3959 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0925 04:14:15.916188    3959 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0925 04:14:15.916226    3959 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0925 04:14:15.923375    3959 provision.go:86] duration metric: configureAuth took 183.95ms
	I0925 04:14:15.923384    3959 buildroot.go:189] setting minikube options for container-runtime
	I0925 04:14:15.923488    3959 config.go:182] Loaded profile config "ingress-addon-legacy-907000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0925 04:14:15.923526    3959 main.go:141] libmachine: Using SSH client type: native
	I0925 04:14:15.923752    3959 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1005f0760] 0x1005f2ed0 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0925 04:14:15.923757    3959 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0925 04:14:15.974222    3959 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0925 04:14:15.974230    3959 buildroot.go:70] root file system type: tmpfs
	I0925 04:14:15.974283    3959 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0925 04:14:15.974329    3959 main.go:141] libmachine: Using SSH client type: native
	I0925 04:14:15.974551    3959 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1005f0760] 0x1005f2ed0 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0925 04:14:15.974583    3959 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0925 04:14:16.029711    3959 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0925 04:14:16.029767    3959 main.go:141] libmachine: Using SSH client type: native
	I0925 04:14:16.030007    3959 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1005f0760] 0x1005f2ed0 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0925 04:14:16.030016    3959 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0925 04:14:16.373685    3959 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
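The one-liner above is an idempotent install idiom: the freshly rendered unit is written to docker.service.new, and the move-reload-restart branch runs only when diff exits non-zero, which happens both when the files differ and, as here, when the installed unit does not exist yet. A minimal sketch of the same pattern, assuming a rendered unit at ./docker.service.new:

	sudo diff -u /lib/systemd/system/docker.service ./docker.service.new || {
	  sudo mv ./docker.service.new /lib/systemd/system/docker.service
	  sudo systemctl daemon-reload
	  sudo systemctl -f enable docker
	  sudo systemctl -f restart docker
	}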
	
	I0925 04:14:16.373698    3959 machine.go:91] provisioned docker machine in 742.479ms
	I0925 04:14:16.373704    3959 client.go:171] LocalClient.Create took 15.102379875s
	I0925 04:14:16.373721    3959 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-907000" took 15.102425459s
	I0925 04:14:16.373727    3959 start.go:300] post-start starting for "ingress-addon-legacy-907000" (driver="qemu2")
	I0925 04:14:16.373732    3959 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0925 04:14:16.373808    3959 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0925 04:14:16.373817    3959 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/ingress-addon-legacy-907000/id_rsa Username:docker}
	I0925 04:14:16.399664    3959 ssh_runner.go:195] Run: cat /etc/os-release
	I0925 04:14:16.400916    3959 info.go:137] Remote host: Buildroot 2021.02.12
	I0925 04:14:16.400922    3959 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17297-1010/.minikube/addons for local assets ...
	I0925 04:14:16.400991    3959 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17297-1010/.minikube/files for local assets ...
	I0925 04:14:16.401091    3959 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17297-1010/.minikube/files/etc/ssl/certs/14692.pem -> 14692.pem in /etc/ssl/certs
	I0925 04:14:16.401096    3959 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17297-1010/.minikube/files/etc/ssl/certs/14692.pem -> /etc/ssl/certs/14692.pem
	I0925 04:14:16.401205    3959 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0925 04:14:16.403763    3959 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/files/etc/ssl/certs/14692.pem --> /etc/ssl/certs/14692.pem (1708 bytes)
	I0925 04:14:16.410715    3959 start.go:303] post-start completed in 36.984ms
	I0925 04:14:16.411090    3959 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/ingress-addon-legacy-907000/config.json ...
	I0925 04:14:16.411254    3959 start.go:128] duration metric: createHost completed in 15.162299458s
	I0925 04:14:16.411279    3959 main.go:141] libmachine: Using SSH client type: native
	I0925 04:14:16.411495    3959 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1005f0760] 0x1005f2ed0 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0925 04:14:16.411500    3959 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0925 04:14:16.460996    3959 main.go:141] libmachine: SSH cmd err, output: <nil>: 1695640456.474207835
	
	I0925 04:14:16.461003    3959 fix.go:206] guest clock: 1695640456.474207835
	I0925 04:14:16.461008    3959 fix.go:219] Guest: 2023-09-25 04:14:16.474207835 -0700 PDT Remote: 2023-09-25 04:14:16.411256 -0700 PDT m=+21.758298126 (delta=62.951835ms)
	I0925 04:14:16.461017    3959 fix.go:190] guest clock delta is within tolerance: 62.951835ms
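The guest-clock fix-up works by sampling date +%s.%N inside the VM over SSH and differencing it against the host wall clock; the guest clock would only be reset if the delta exceeded the tolerance. A rough equivalent, assuming SSH access as the docker user at this run's guest IP:

	guest=$(ssh docker@192.168.105.6 'date +%s.%N')
	host=$(date +%s.%N)
	echo "guest clock delta: $(echo "$host - $guest" | bc)s"

Here the measured delta of ~63ms was within tolerance, so nothing was adjusted.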
	I0925 04:14:16.461020    3959 start.go:83] releasing machines lock for "ingress-addon-legacy-907000", held for 15.212103625s
	I0925 04:14:16.461324    3959 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0925 04:14:16.461343    3959 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/ingress-addon-legacy-907000/id_rsa Username:docker}
	I0925 04:14:16.461324    3959 ssh_runner.go:195] Run: cat /version.json
	I0925 04:14:16.461362    3959 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/ingress-addon-legacy-907000/id_rsa Username:docker}
	I0925 04:14:16.485921    3959 ssh_runner.go:195] Run: systemctl --version
	I0925 04:14:16.488015    3959 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0925 04:14:16.490433    3959 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0925 04:14:16.490465    3959 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0925 04:14:16.538058    3959 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0925 04:14:16.543396    3959 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
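The two find/sed pipelines above pin every bridge- and podman-style CNI config under /etc/cni/net.d to the pod CIDR that will shortly be handed to kubeadm. For the one file they matched, the effective edit is roughly this sketch:

	sudo sed -i -r 's|"subnet": ".*"|"subnet": "10.244.0.0/16"|g' \
	  /etc/cni/net.d/87-podman-bridge.conflist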
	I0925 04:14:16.543405    3959 start.go:469] detecting cgroup driver to use...
	I0925 04:14:16.543486    3959 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0925 04:14:16.550712    3959 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0925 04:14:16.553830    3959 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0925 04:14:16.556727    3959 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0925 04:14:16.556748    3959 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0925 04:14:16.559959    3959 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0925 04:14:16.563374    3959 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0925 04:14:16.566810    3959 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0925 04:14:16.570001    3959 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0925 04:14:16.572884    3959 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
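Taken together, the sed edits above force containerd onto the cgroupfs driver, pin the pause image, and point CNI at /etc/cni/net.d. The merged result can be verified without hand-parsing the TOML (a sketch; these key names are the usual containerd 1.x ones and can vary by version):

	sudo containerd config dump | grep -E 'SystemdCgroup|sandbox_image|conf_dir'
	# expected: SystemdCgroup = false, sandbox_image = "registry.k8s.io/pause:3.2",
	#           conf_dir = "/etc/cni/net.d"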
	I0925 04:14:16.576069    3959 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0925 04:14:16.579401    3959 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0925 04:14:16.582280    3959 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 04:14:16.650325    3959 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0925 04:14:16.658601    3959 start.go:469] detecting cgroup driver to use...
	I0925 04:14:16.658665    3959 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0925 04:14:16.663721    3959 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0925 04:14:16.669265    3959 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0925 04:14:16.676703    3959 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0925 04:14:16.681386    3959 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0925 04:14:16.686026    3959 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0925 04:14:16.732077    3959 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0925 04:14:16.737386    3959 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0925 04:14:16.742995    3959 ssh_runner.go:195] Run: which cri-dockerd
	I0925 04:14:16.744266    3959 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0925 04:14:16.746834    3959 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0925 04:14:16.751945    3959 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0925 04:14:16.824397    3959 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0925 04:14:16.906814    3959 docker.go:554] configuring docker to use "cgroupfs" as cgroup driver...
	I0925 04:14:16.906884    3959 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
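The 130-byte daemon.json written here is what actually selects cgroupfs for dockerd. Its exact contents are not echoed in the log; a minimal file with the same effect would look like this sketch:

	sudo tee /etc/docker/daemon.json <<'EOF'
	{
	  "exec-opts": ["native.cgroupdriver=cgroupfs"]
	}
	EOF
	docker info --format '{{.CgroupDriver}}'   # expect: cgroupfs

The docker info check at the end is the same probe the log itself runs after the restart.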
	I0925 04:14:16.912088    3959 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 04:14:16.991222    3959 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0925 04:14:18.156459    3959 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.165217792s)
	I0925 04:14:18.156529    3959 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0925 04:14:18.168587    3959 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0925 04:14:18.186866    3959 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 24.0.6 ...
	I0925 04:14:18.186963    3959 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0925 04:14:18.188507    3959 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0925 04:14:18.192300    3959 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0925 04:14:18.192341    3959 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0925 04:14:18.197877    3959 docker.go:664] Got preloaded images: 
	I0925 04:14:18.197889    3959 docker.go:670] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0925 04:14:18.197932    3959 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0925 04:14:18.200606    3959 ssh_runner.go:195] Run: which lz4
	I0925 04:14:18.201863    3959 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 -> /preloaded.tar.lz4
	I0925 04:14:18.201956    3959 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0925 04:14:18.203072    3959 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0925 04:14:18.203086    3959 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (459739018 bytes)
	I0925 04:14:19.892184    3959 docker.go:628] Took 1.690268 seconds to copy over tarball
	I0925 04:14:19.892256    3959 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0925 04:14:21.190283    3959 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.298009917s)
	I0925 04:14:21.190297    3959 ssh_runner.go:146] rm: /preloaded.tar.lz4
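The preload fast path is nothing more than a tarball of pre-pulled /var/lib/docker content: copy it into the guest, extract with lz4, delete. Reproduced by hand with this run's paths and guest address (sketch):

	scp ~/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 \
	  docker@192.168.105.6:/preloaded.tar.lz4
	ssh docker@192.168.105.6 'sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4 && sudo rm /preloaded.tar.lz4'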
	I0925 04:14:21.214775    3959 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0925 04:14:21.222785    3959 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
	I0925 04:14:21.231945    3959 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 04:14:21.307810    3959 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0925 04:14:23.009681    3959 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.701850375s)
	I0925 04:14:23.009791    3959 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0925 04:14:23.015751    3959 docker.go:664] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0925 04:14:23.015761    3959 docker.go:670] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0925 04:14:23.015765    3959 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0925 04:14:23.023421    3959 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0925 04:14:23.023477    3959 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0925 04:14:23.023513    3959 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0925 04:14:23.023882    3959 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0925 04:14:23.023918    3959 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0925 04:14:23.024710    3959 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0925 04:14:23.024751    3959 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0925 04:14:23.029250    3959 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0925 04:14:23.033848    3959 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0925 04:14:23.033935    3959 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0925 04:14:23.035155    3959 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0925 04:14:23.035249    3959 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0925 04:14:23.035280    3959 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0925 04:14:23.035325    3959 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0925 04:14:23.035643    3959 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0925 04:14:23.037368    3959 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	W0925 04:14:23.567477    3959 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0925 04:14:23.567592    3959 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0925 04:14:23.574035    3959 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I0925 04:14:23.574060    3959 docker.go:317] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0925 04:14:23.574104    3959 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0925 04:14:23.580460    3959 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	W0925 04:14:23.634215    3959 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0925 04:14:23.634373    3959 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0925 04:14:23.640247    3959 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I0925 04:14:23.640268    3959 docker.go:317] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0925 04:14:23.640318    3959 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.18.20
	I0925 04:14:23.645474    3959 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	W0925 04:14:24.059397    3959 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0925 04:14:24.059511    3959 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0925 04:14:24.067676    3959 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I0925 04:14:24.067704    3959 docker.go:317] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0925 04:14:24.067754    3959 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0925 04:14:24.073460    3959 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	W0925 04:14:24.254952    3959 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0925 04:14:24.255074    3959 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0925 04:14:24.265309    3959 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0925 04:14:24.265331    3959 docker.go:317] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0925 04:14:24.265370    3959 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0925 04:14:24.275571    3959 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	W0925 04:14:24.291100    3959 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0925 04:14:24.291198    3959 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0925 04:14:24.297442    3959 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I0925 04:14:24.297469    3959 docker.go:317] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0925 04:14:24.297509    3959 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0925 04:14:24.303472    3959 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0925 04:14:24.475462    3959 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0925 04:14:24.481808    3959 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I0925 04:14:24.481837    3959 docker.go:317] Removing image: registry.k8s.io/pause:3.2
	I0925 04:14:24.481885    3959 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
	I0925 04:14:24.488177    3959 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	W0925 04:14:24.681722    3959 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I0925 04:14:24.681859    3959 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0925 04:14:24.688201    3959 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I0925 04:14:24.688230    3959 docker.go:317] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0925 04:14:24.688276    3959 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
	I0925 04:14:24.694417    3959 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	W0925 04:14:24.892460    3959 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I0925 04:14:24.892956    3959 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0925 04:14:24.911210    3959 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I0925 04:14:24.911255    3959 docker.go:317] Removing image: registry.k8s.io/coredns:1.6.7
	I0925 04:14:24.911343    3959 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.7
	I0925 04:14:24.924634    3959 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	I0925 04:14:24.924698    3959 cache_images.go:92] LoadImages completed in 1.908924s
	W0925 04:14:24.924782    3959 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20: no such file or directory
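Each image above goes through the same reconcile: inspect it in the runtime, and if its ID differs from the expected arm64 digest, remove it and reload from the on-disk cache, which in this run fails because the cache files were never downloaded. A sketch of that loop; the expected digests are elided (they come from minikube's internal image table), and whether docker load accepts the cached file format as-is is an assumption here:

	for img in registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/pause:3.2; do
	  have=$(docker image inspect --format '{{.Id}}' "$img" 2>/dev/null)
	  want=...   # expected arm64 digest, from minikube's image table
	  if [ "$have" != "$want" ]; then
	    docker rmi "$img"
	    docker load -i ~/.minikube/cache/images/arm64/"${img/:/_}"   # e.g. .../kube-apiserver_v1.18.20
	  fi
	done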
	I0925 04:14:24.924880    3959 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0925 04:14:24.938682    3959 cni.go:84] Creating CNI manager for ""
	I0925 04:14:24.938701    3959 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0925 04:14:24.938720    3959 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0925 04:14:24.938734    3959 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.6 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-907000 NodeName:ingress-addon-legacy-907000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.6"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.6 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0925 04:14:24.938880    3959 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.6
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-907000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.6
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.6"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
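A config like the one above can be sanity-checked before kubeadm init ever runs; listing the images it implies exercises both the parser and the pinned kubernetesVersion in one step (sketch, using the path the file is copied to below):

	sudo kubeadm config images list --config /var/tmp/minikube/kubeadm.yaml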
	
	I0925 04:14:24.938950    3959 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-907000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-907000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0925 04:14:24.939032    3959 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0925 04:14:24.943997    3959 binaries.go:44] Found k8s binaries, skipping transfer
	I0925 04:14:24.944051    3959 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0925 04:14:24.947971    3959 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (355 bytes)
	I0925 04:14:24.954669    3959 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0925 04:14:24.960553    3959 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2127 bytes)
	I0925 04:14:24.966116    3959 ssh_runner.go:195] Run: grep 192.168.105.6	control-plane.minikube.internal$ /etc/hosts
	I0925 04:14:24.967422    3959 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.6	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0925 04:14:24.971575    3959 certs.go:56] Setting up /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/ingress-addon-legacy-907000 for IP: 192.168.105.6
	I0925 04:14:24.971590    3959 certs.go:190] acquiring lock for shared ca certs: {Name:mk095b03680bcdeba6c321a9f458c9fbafa67639 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 04:14:24.971935    3959 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17297-1010/.minikube/ca.key
	I0925 04:14:24.972090    3959 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17297-1010/.minikube/proxy-client-ca.key
	I0925 04:14:24.972116    3959 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/ingress-addon-legacy-907000/client.key
	I0925 04:14:24.972124    3959 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/ingress-addon-legacy-907000/client.crt with IP's: []
	I0925 04:14:25.068325    3959 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/ingress-addon-legacy-907000/client.crt ...
	I0925 04:14:25.068331    3959 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/ingress-addon-legacy-907000/client.crt: {Name:mk257358df151983d34c6e1a0e9aa9e348954276 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 04:14:25.068525    3959 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/ingress-addon-legacy-907000/client.key ...
	I0925 04:14:25.068529    3959 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/ingress-addon-legacy-907000/client.key: {Name:mk0d6a618e21121460f39e0ef0a75ced30b10600 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 04:14:25.068645    3959 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/ingress-addon-legacy-907000/apiserver.key.b354f644
	I0925 04:14:25.068651    3959 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/ingress-addon-legacy-907000/apiserver.crt.b354f644 with IP's: [192.168.105.6 10.96.0.1 127.0.0.1 10.0.0.1]
	I0925 04:14:25.142519    3959 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/ingress-addon-legacy-907000/apiserver.crt.b354f644 ...
	I0925 04:14:25.142523    3959 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/ingress-addon-legacy-907000/apiserver.crt.b354f644: {Name:mk2914a54b7b974dce63262fba9fdf0bf5ca547f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 04:14:25.142659    3959 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/ingress-addon-legacy-907000/apiserver.key.b354f644 ...
	I0925 04:14:25.142662    3959 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/ingress-addon-legacy-907000/apiserver.key.b354f644: {Name:mk69db73088e59def02362905e5008c85a6d5cea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 04:14:25.142764    3959 certs.go:337] copying /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/ingress-addon-legacy-907000/apiserver.crt.b354f644 -> /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/ingress-addon-legacy-907000/apiserver.crt
	I0925 04:14:25.142909    3959 certs.go:341] copying /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/ingress-addon-legacy-907000/apiserver.key.b354f644 -> /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/ingress-addon-legacy-907000/apiserver.key
	I0925 04:14:25.143020    3959 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/ingress-addon-legacy-907000/proxy-client.key
	I0925 04:14:25.143027    3959 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/ingress-addon-legacy-907000/proxy-client.crt with IP's: []
	I0925 04:14:25.172083    3959 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/ingress-addon-legacy-907000/proxy-client.crt ...
	I0925 04:14:25.172086    3959 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/ingress-addon-legacy-907000/proxy-client.crt: {Name:mkd9cc1c188325d788b5b9ddbab860c4e284e14b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 04:14:25.172210    3959 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/ingress-addon-legacy-907000/proxy-client.key ...
	I0925 04:14:25.172213    3959 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/ingress-addon-legacy-907000/proxy-client.key: {Name:mkd902d550ce60249b07bd86d3b6f23397cda933 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 04:14:25.172319    3959 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/ingress-addon-legacy-907000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0925 04:14:25.172337    3959 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/ingress-addon-legacy-907000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0925 04:14:25.172349    3959 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/ingress-addon-legacy-907000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0925 04:14:25.172361    3959 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/ingress-addon-legacy-907000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0925 04:14:25.172373    3959 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17297-1010/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0925 04:14:25.172384    3959 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17297-1010/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0925 04:14:25.172395    3959 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17297-1010/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0925 04:14:25.172407    3959 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17297-1010/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0925 04:14:25.172485    3959 certs.go:437] found cert: /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/1469.pem (1338 bytes)
	W0925 04:14:25.172700    3959 certs.go:433] ignoring /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/1469_empty.pem, impossibly tiny 0 bytes
	I0925 04:14:25.172712    3959 certs.go:437] found cert: /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca-key.pem (1675 bytes)
	I0925 04:14:25.172737    3959 certs.go:437] found cert: /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem (1082 bytes)
	I0925 04:14:25.172762    3959 certs.go:437] found cert: /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/cert.pem (1123 bytes)
	I0925 04:14:25.172783    3959 certs.go:437] found cert: /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/Users/jenkins/minikube-integration/17297-1010/.minikube/certs/key.pem (1679 bytes)
	I0925 04:14:25.172832    3959 certs.go:437] found cert: /Users/jenkins/minikube-integration/17297-1010/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17297-1010/.minikube/files/etc/ssl/certs/14692.pem (1708 bytes)
	I0925 04:14:25.172853    3959 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/1469.pem -> /usr/share/ca-certificates/1469.pem
	I0925 04:14:25.172863    3959 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17297-1010/.minikube/files/etc/ssl/certs/14692.pem -> /usr/share/ca-certificates/14692.pem
	I0925 04:14:25.172876    3959 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17297-1010/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0925 04:14:25.173173    3959 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/ingress-addon-legacy-907000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0925 04:14:25.180155    3959 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/ingress-addon-legacy-907000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0925 04:14:25.187035    3959 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/ingress-addon-legacy-907000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0925 04:14:25.194034    3959 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/ingress-addon-legacy-907000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0925 04:14:25.200906    3959 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0925 04:14:25.207901    3959 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0925 04:14:25.215415    3959 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0925 04:14:25.222699    3959 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0925 04:14:25.229557    3959 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/1469.pem --> /usr/share/ca-certificates/1469.pem (1338 bytes)
	I0925 04:14:25.236196    3959 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/files/etc/ssl/certs/14692.pem --> /usr/share/ca-certificates/14692.pem (1708 bytes)
	I0925 04:14:25.243342    3959 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1010/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0925 04:14:25.250553    3959 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0925 04:14:25.255644    3959 ssh_runner.go:195] Run: openssl version
	I0925 04:14:25.257758    3959 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1469.pem && ln -fs /usr/share/ca-certificates/1469.pem /etc/ssl/certs/1469.pem"
	I0925 04:14:25.260708    3959 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1469.pem
	I0925 04:14:25.262216    3959 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 25 11:07 /usr/share/ca-certificates/1469.pem
	I0925 04:14:25.262239    3959 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1469.pem
	I0925 04:14:25.264081    3959 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1469.pem /etc/ssl/certs/51391683.0"
	I0925 04:14:25.267626    3959 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14692.pem && ln -fs /usr/share/ca-certificates/14692.pem /etc/ssl/certs/14692.pem"
	I0925 04:14:25.271152    3959 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14692.pem
	I0925 04:14:25.272722    3959 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 25 11:07 /usr/share/ca-certificates/14692.pem
	I0925 04:14:25.272741    3959 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14692.pem
	I0925 04:14:25.274498    3959 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14692.pem /etc/ssl/certs/3ec20f2e.0"
	I0925 04:14:25.277464    3959 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0925 04:14:25.280473    3959 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0925 04:14:25.282275    3959 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 25 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I0925 04:14:25.282301    3959 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0925 04:14:25.284178    3959 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
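Each test/ln/ls/openssl sequence above installs one CA into the guest's trust store: OpenSSL looks certificates up in /etc/ssl/certs through symlinks named after the certificate's subject hash, so the link name has to be computed from the cert itself. Condensed, for the minikubeCA cert whose hash b5213941 appears above (sketch):

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"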
	I0925 04:14:25.287568    3959 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0925 04:14:25.288923    3959 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0925 04:14:25.288957    3959 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-907000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-907000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.6 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 04:14:25.289020    3959 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0925 04:14:25.294344    3959 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0925 04:14:25.297155    3959 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0925 04:14:25.300176    3959 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0925 04:14:25.303327    3959 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0925 04:14:25.303347    3959 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0925 04:14:25.330412    3959 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0925 04:14:25.330440    3959 kubeadm.go:322] [preflight] Running pre-flight checks
	I0925 04:14:25.412490    3959 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0925 04:14:25.412548    3959 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0925 04:14:25.412645    3959 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0925 04:14:25.461269    3959 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0925 04:14:25.462977    3959 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0925 04:14:25.463000    3959 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0925 04:14:25.549307    3959 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0925 04:14:25.556469    3959 out.go:204]   - Generating certificates and keys ...
	I0925 04:14:25.556525    3959 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0925 04:14:25.556554    3959 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0925 04:14:25.690945    3959 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0925 04:14:25.782025    3959 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0925 04:14:25.832480    3959 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0925 04:14:25.880300    3959 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0925 04:14:25.953108    3959 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0925 04:14:25.953176    3959 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-907000 localhost] and IPs [192.168.105.6 127.0.0.1 ::1]
	I0925 04:14:26.057169    3959 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0925 04:14:26.057245    3959 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-907000 localhost] and IPs [192.168.105.6 127.0.0.1 ::1]
	I0925 04:14:26.201013    3959 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0925 04:14:26.334458    3959 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0925 04:14:26.501288    3959 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0925 04:14:26.501317    3959 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0925 04:14:26.763804    3959 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0925 04:14:26.863419    3959 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0925 04:14:26.915353    3959 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0925 04:14:27.049795    3959 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0925 04:14:27.050004    3959 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0925 04:14:27.053126    3959 out.go:204]   - Booting up control plane ...
	I0925 04:14:27.053172    3959 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0925 04:14:27.056972    3959 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0925 04:14:27.057684    3959 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0925 04:14:27.061111    3959 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0925 04:14:27.061230    3959 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0925 04:14:38.569712    3959 kubeadm.go:322] [apiclient] All control plane components are healthy after 11.507868 seconds
	I0925 04:14:38.570033    3959 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0925 04:14:38.595375    3959 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0925 04:14:39.114956    3959 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0925 04:14:39.115103    3959 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-907000 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0925 04:14:39.621929    3959 kubeadm.go:322] [bootstrap-token] Using token: ienysa.3sg9bqbj7myoieus
	I0925 04:14:39.625547    3959 out.go:204]   - Configuring RBAC rules ...
	I0925 04:14:39.625662    3959 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0925 04:14:39.627027    3959 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0925 04:14:39.632789    3959 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0925 04:14:39.634255    3959 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0925 04:14:39.635832    3959 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0925 04:14:39.637389    3959 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0925 04:14:39.642656    3959 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0925 04:14:39.855246    3959 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0925 04:14:40.028501    3959 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0925 04:14:40.030333    3959 kubeadm.go:322] 
	I0925 04:14:40.030372    3959 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0925 04:14:40.030376    3959 kubeadm.go:322] 
	I0925 04:14:40.030415    3959 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0925 04:14:40.030418    3959 kubeadm.go:322] 
	I0925 04:14:40.030432    3959 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0925 04:14:40.030468    3959 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0925 04:14:40.030509    3959 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0925 04:14:40.030514    3959 kubeadm.go:322] 
	I0925 04:14:40.030561    3959 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0925 04:14:40.030627    3959 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0925 04:14:40.030717    3959 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0925 04:14:40.030724    3959 kubeadm.go:322] 
	I0925 04:14:40.030792    3959 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0925 04:14:40.030853    3959 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0925 04:14:40.030862    3959 kubeadm.go:322] 
	I0925 04:14:40.030954    3959 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ienysa.3sg9bqbj7myoieus \
	I0925 04:14:40.031034    3959 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:3fc5fb926713648f8638ba10da0d4f45584d32929bcc07af5ada491c000ad47e \
	I0925 04:14:40.031052    3959 kubeadm.go:322]     --control-plane 
	I0925 04:14:40.031058    3959 kubeadm.go:322] 
	I0925 04:14:40.031115    3959 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0925 04:14:40.031127    3959 kubeadm.go:322] 
	I0925 04:14:40.031178    3959 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ienysa.3sg9bqbj7myoieus \
	I0925 04:14:40.031251    3959 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:3fc5fb926713648f8638ba10da0d4f45584d32929bcc07af5ada491c000ad47e 
	I0925 04:14:40.031756    3959 kubeadm.go:322] W0925 11:14:25.343065    1415 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0925 04:14:40.031850    3959 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0925 04:14:40.031927    3959 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 19.03
	I0925 04:14:40.031989    3959 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0925 04:14:40.032078    3959 kubeadm.go:322] W0925 11:14:27.070307    1415 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0925 04:14:40.032163    3959 kubeadm.go:322] W0925 11:14:27.072289    1415 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
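The --discovery-token-ca-cert-hash in the join commands above is a SHA-256 over the cluster CA's public key; joining nodes use it to pin the control plane's identity. It can be recomputed from the CA certificate with the standard kubeadm recipe, pointed at this cluster's cert directory:

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl pkey -pubin -outform der \
	  | openssl dgst -sha256 -hex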
	I0925 04:14:40.032176    3959 cni.go:84] Creating CNI manager for ""
	I0925 04:14:40.032184    3959 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0925 04:14:40.032198    3959 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0925 04:14:40.032456    3959 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=1bf6c3d5317028f348e55ea19d261973a6487d3c minikube.k8s.io/name=ingress-addon-legacy-907000 minikube.k8s.io/updated_at=2023_09_25T04_14_40_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 04:14:40.032461    3959 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 04:14:40.036177    3959 ops.go:34] apiserver oom_adj: -16
	I0925 04:14:40.158196    3959 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 04:14:40.191923    3959 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 04:14:40.728560    3959 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 04:14:41.228634    3959 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 04:14:41.728592    3959 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 04:14:42.228574    3959 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 04:14:42.728651    3959 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 04:14:43.228634    3959 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 04:14:43.728582    3959 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 04:14:44.228607    3959 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 04:14:44.728609    3959 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 04:14:45.228390    3959 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 04:14:45.728634    3959 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 04:14:46.228583    3959 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 04:14:46.727257    3959 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 04:14:47.228593    3959 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 04:14:47.728308    3959 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 04:14:48.228327    3959 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 04:14:48.728635    3959 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 04:14:49.228590    3959 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 04:14:49.728614    3959 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 04:14:50.228577    3959 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 04:14:50.728631    3959 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 04:14:51.228617    3959 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 04:14:51.728495    3959 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 04:14:52.228618    3959 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 04:14:52.728618    3959 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 04:14:53.228443    3959 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 04:14:53.728543    3959 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 04:14:54.228414    3959 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 04:14:54.728576    3959 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 04:14:55.228369    3959 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 04:14:55.277782    3959 kubeadm.go:1081] duration metric: took 15.245559208s to wait for elevateKubeSystemPrivileges.
	I0925 04:14:55.277798    3959 kubeadm.go:406] StartCluster complete in 29.988822125s
	I0925 04:14:55.277808    3959 settings.go:142] acquiring lock: {Name:mkb5a0822179f07ef9369c44aa9b64eb9ef74eed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 04:14:55.277886    3959 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17297-1010/kubeconfig
	I0925 04:14:55.278289    3959 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/kubeconfig: {Name:mkaa9d09ca2bf27c1a43efc9acf938adcc68343d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 04:14:55.278502    3959 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0925 04:14:55.278602    3959 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0925 04:14:55.278644    3959 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-907000"
	I0925 04:14:55.278653    3959 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-907000"
	I0925 04:14:55.278678    3959 host.go:66] Checking if "ingress-addon-legacy-907000" exists ...
	I0925 04:14:55.278673    3959 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-907000"
	I0925 04:14:55.278689    3959 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-907000"
	I0925 04:14:55.278995    3959 kapi.go:59] client config for ingress-addon-legacy-907000: &rest.Config{Host:"https://192.168.105.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/ingress-addon-legacy-907000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/ingress-addon-legacy-907000/client.key", CAFile:"/Users/jenkins/minikube-integration/17297-1010/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1018b65e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0925 04:14:55.279260    3959 config.go:182] Loaded profile config "ingress-addon-legacy-907000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0925 04:14:55.279388    3959 cert_rotation.go:137] Starting client certificate rotation controller
	I0925 04:14:55.280116    3959 kapi.go:59] client config for ingress-addon-legacy-907000: &rest.Config{Host:"https://192.168.105.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/ingress-addon-legacy-907000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/ingress-addon-legacy-907000/client.key", CAFile:"/Users/jenkins/minikube-integration/17297-1010/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1018b65e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0925 04:14:55.284584    3959 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0925 04:14:55.287546    3959 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0925 04:14:55.287553    3959 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0925 04:14:55.287561    3959 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/ingress-addon-legacy-907000/id_rsa Username:docker}
	I0925 04:14:55.290890    3959 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-907000"
	I0925 04:14:55.290906    3959 host.go:66] Checking if "ingress-addon-legacy-907000" exists ...
	I0925 04:14:55.291609    3959 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0925 04:14:55.291615    3959 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0925 04:14:55.291621    3959 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/ingress-addon-legacy-907000/id_rsa Username:docker}
	I0925 04:14:55.293634    3959 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-907000" context rescaled to 1 replicas
	I0925 04:14:55.293648    3959 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.6 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 04:14:55.297477    3959 out.go:177] * Verifying Kubernetes components...
	I0925 04:14:55.304574    3959 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0925 04:14:55.329533    3959 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0925 04:14:55.329842    3959 kapi.go:59] client config for ingress-addon-legacy-907000: &rest.Config{Host:"https://192.168.105.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/ingress-addon-legacy-907000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/ingress-addon-legacy-907000/client.key", CAFile:"/Users/jenkins/minikube-integration/17297-1010/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1018b65e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0925 04:14:55.329976    3959 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-907000" to be "Ready" ...
	I0925 04:14:55.331379    3959 node_ready.go:49] node "ingress-addon-legacy-907000" has status "Ready":"True"
	I0925 04:14:55.331386    3959 node_ready.go:38] duration metric: took 1.402625ms waiting for node "ingress-addon-legacy-907000" to be "Ready" ...
	I0925 04:14:55.331389    3959 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0925 04:14:55.334116    3959 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-907000" in "kube-system" namespace to be "Ready" ...
	I0925 04:14:55.334736    3959 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0925 04:14:55.335608    3959 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0925 04:14:55.563805    3959 start.go:923] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I0925 04:14:55.586039    3959 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0925 04:14:55.596083    3959 addons.go:502] enable addons completed in 317.479708ms: enabled=[default-storageclass storage-provisioner]
	I0925 04:14:56.342799    3959 pod_ready.go:92] pod "etcd-ingress-addon-legacy-907000" in "kube-system" namespace has status "Ready":"True"
	I0925 04:14:56.342809    3959 pod_ready.go:81] duration metric: took 1.008684416s waiting for pod "etcd-ingress-addon-legacy-907000" in "kube-system" namespace to be "Ready" ...
	I0925 04:14:56.342814    3959 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-907000" in "kube-system" namespace to be "Ready" ...
	I0925 04:14:56.344981    3959 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-907000" in "kube-system" namespace has status "Ready":"True"
	I0925 04:14:56.344987    3959 pod_ready.go:81] duration metric: took 2.169834ms waiting for pod "kube-apiserver-ingress-addon-legacy-907000" in "kube-system" namespace to be "Ready" ...
	I0925 04:14:56.344992    3959 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-907000" in "kube-system" namespace to be "Ready" ...
	I0925 04:14:56.347415    3959 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-907000" in "kube-system" namespace has status "Ready":"True"
	I0925 04:14:56.347428    3959 pod_ready.go:81] duration metric: took 2.433042ms waiting for pod "kube-controller-manager-ingress-addon-legacy-907000" in "kube-system" namespace to be "Ready" ...
	I0925 04:14:56.347433    3959 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-907000" in "kube-system" namespace to be "Ready" ...
	I0925 04:14:56.532078    3959 request.go:629] Waited for 183.533208ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/nodes/ingress-addon-legacy-907000
	I0925 04:14:56.534304    3959 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-907000" in "kube-system" namespace has status "Ready":"True"
	I0925 04:14:56.534312    3959 pod_ready.go:81] duration metric: took 186.873875ms waiting for pod "kube-scheduler-ingress-addon-legacy-907000" in "kube-system" namespace to be "Ready" ...
	I0925 04:14:56.534317    3959 pod_ready.go:38] duration metric: took 1.202921167s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0925 04:14:56.534330    3959 api_server.go:52] waiting for apiserver process to appear ...
	I0925 04:14:56.534402    3959 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 04:14:56.547405    3959 api_server.go:72] duration metric: took 1.253742583s to wait for apiserver process to appear ...
	I0925 04:14:56.547417    3959 api_server.go:88] waiting for apiserver healthz status ...
	I0925 04:14:56.547427    3959 api_server.go:253] Checking apiserver healthz at https://192.168.105.6:8443/healthz ...
	I0925 04:14:56.552202    3959 api_server.go:279] https://192.168.105.6:8443/healthz returned 200:
	ok
	I0925 04:14:56.552841    3959 api_server.go:141] control plane version: v1.18.20
	I0925 04:14:56.552854    3959 api_server.go:131] duration metric: took 5.432584ms to wait for apiserver health ...
	I0925 04:14:56.552858    3959 system_pods.go:43] waiting for kube-system pods to appear ...
	I0925 04:14:56.732063    3959 request.go:629] Waited for 179.165ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/kube-system/pods
	I0925 04:14:56.737442    3959 system_pods.go:59] 7 kube-system pods found
	I0925 04:14:56.737467    3959 system_pods.go:61] "coredns-66bff467f8-gj5hn" [2bd8985a-3a79-426c-ab66-d433bba606f3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 04:14:56.737475    3959 system_pods.go:61] "etcd-ingress-addon-legacy-907000" [a0529388-b82c-4ec9-b7a7-937d10805a66] Running
	I0925 04:14:56.737482    3959 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-907000" [8da8b993-b1b8-4824-8477-f4d0dc312d76] Running
	I0925 04:14:56.737491    3959 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-907000" [c61e7a36-5ea2-424e-a9fc-fc9d3739a2cc] Running
	I0925 04:14:56.737496    3959 system_pods.go:61] "kube-proxy-hqw2l" [aa432a4d-d79b-4cd9-a274-340f672388e6] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 04:14:56.737501    3959 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-907000" [c3b9cecc-a1e6-41cc-872d-96c135cd32a1] Running
	I0925 04:14:56.737507    3959 system_pods.go:61] "storage-provisioner" [5f1e6569-51c3-4770-93c2-51ed3868828d] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0925 04:14:56.737513    3959 system_pods.go:74] duration metric: took 184.650416ms to wait for pod list to return data ...
	I0925 04:14:56.737521    3959 default_sa.go:34] waiting for default service account to be created ...
	I0925 04:14:56.932172    3959 request.go:629] Waited for 194.545958ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/default/serviceaccounts
	I0925 04:14:56.938600    3959 default_sa.go:45] found service account: "default"
	I0925 04:14:56.938640    3959 default_sa.go:55] duration metric: took 201.103375ms for default service account to be created ...
	I0925 04:14:56.938655    3959 system_pods.go:116] waiting for k8s-apps to be running ...
	I0925 04:14:57.132162    3959 request.go:629] Waited for 193.377667ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/kube-system/pods
	I0925 04:14:57.145035    3959 system_pods.go:86] 7 kube-system pods found
	I0925 04:14:57.145074    3959 system_pods.go:89] "coredns-66bff467f8-gj5hn" [2bd8985a-3a79-426c-ab66-d433bba606f3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 04:14:57.145089    3959 system_pods.go:89] "etcd-ingress-addon-legacy-907000" [a0529388-b82c-4ec9-b7a7-937d10805a66] Running
	I0925 04:14:57.145100    3959 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-907000" [8da8b993-b1b8-4824-8477-f4d0dc312d76] Running
	I0925 04:14:57.145112    3959 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-907000" [c61e7a36-5ea2-424e-a9fc-fc9d3739a2cc] Running
	I0925 04:14:57.145124    3959 system_pods.go:89] "kube-proxy-hqw2l" [aa432a4d-d79b-4cd9-a274-340f672388e6] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 04:14:57.145138    3959 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-907000" [c3b9cecc-a1e6-41cc-872d-96c135cd32a1] Running
	I0925 04:14:57.145148    3959 system_pods.go:89] "storage-provisioner" [5f1e6569-51c3-4770-93c2-51ed3868828d] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0925 04:14:57.145223    3959 retry.go:31] will retry after 295.370857ms: missing components: kube-proxy
	I0925 04:14:57.447759    3959 system_pods.go:86] 7 kube-system pods found
	I0925 04:14:57.447782    3959 system_pods.go:89] "coredns-66bff467f8-gj5hn" [2bd8985a-3a79-426c-ab66-d433bba606f3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 04:14:57.447789    3959 system_pods.go:89] "etcd-ingress-addon-legacy-907000" [a0529388-b82c-4ec9-b7a7-937d10805a66] Running
	I0925 04:14:57.447795    3959 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-907000" [8da8b993-b1b8-4824-8477-f4d0dc312d76] Running
	I0925 04:14:57.447818    3959 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-907000" [c61e7a36-5ea2-424e-a9fc-fc9d3739a2cc] Running
	I0925 04:14:57.447827    3959 system_pods.go:89] "kube-proxy-hqw2l" [aa432a4d-d79b-4cd9-a274-340f672388e6] Running
	I0925 04:14:57.447835    3959 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-907000" [c3b9cecc-a1e6-41cc-872d-96c135cd32a1] Running
	I0925 04:14:57.447843    3959 system_pods.go:89] "storage-provisioner" [5f1e6569-51c3-4770-93c2-51ed3868828d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 04:14:57.447862    3959 system_pods.go:126] duration metric: took 509.1955ms to wait for k8s-apps to be running ...
	I0925 04:14:57.447872    3959 system_svc.go:44] waiting for kubelet service to be running ....
	I0925 04:14:57.447975    3959 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0925 04:14:57.457394    3959 system_svc.go:56] duration metric: took 9.517666ms WaitForService to wait for kubelet.
	I0925 04:14:57.457411    3959 kubeadm.go:581] duration metric: took 2.163750042s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0925 04:14:57.457436    3959 node_conditions.go:102] verifying NodePressure condition ...
	I0925 04:14:57.531659    3959 request.go:629] Waited for 74.18125ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/nodes
	I0925 04:14:57.535243    3959 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0925 04:14:57.535275    3959 node_conditions.go:123] node cpu capacity is 2
	I0925 04:14:57.535290    3959 node_conditions.go:105] duration metric: took 77.846125ms to run NodePressure ...
	I0925 04:14:57.535304    3959 start.go:228] waiting for startup goroutines ...
	I0925 04:14:57.535315    3959 start.go:233] waiting for cluster config update ...
	I0925 04:14:57.535328    3959 start.go:242] writing updated cluster config ...
	I0925 04:14:57.536041    3959 ssh_runner.go:195] Run: rm -f paused
	I0925 04:14:57.675035    3959 start.go:600] kubectl: 1.27.2, cluster: 1.18.20 (minor skew: 9)
	I0925 04:14:57.682997    3959 out.go:177] 
	W0925 04:14:57.689977    3959 out.go:239] ! /usr/local/bin/kubectl is version 1.27.2, which may have incompatibilities with Kubernetes 1.18.20.
	I0925 04:14:57.693965    3959 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0925 04:14:57.702003    3959 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-907000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Mon 2023-09-25 11:14:12 UTC, ends at Mon 2023-09-25 11:16:04 UTC. --
	Sep 25 11:15:42 ingress-addon-legacy-907000 dockerd[1074]: time="2023-09-25T11:15:42.072029890Z" level=info msg="shim disconnected" id=b571f381b27c7f4c294d4947591cbdce4671b13fdf2ee4e61b6984bcd104fbdc namespace=moby
	Sep 25 11:15:42 ingress-addon-legacy-907000 dockerd[1074]: time="2023-09-25T11:15:42.072054473Z" level=warning msg="cleaning up after shim disconnected" id=b571f381b27c7f4c294d4947591cbdce4671b13fdf2ee4e61b6984bcd104fbdc namespace=moby
	Sep 25 11:15:42 ingress-addon-legacy-907000 dockerd[1074]: time="2023-09-25T11:15:42.072058306Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 25 11:15:54 ingress-addon-legacy-907000 dockerd[1068]: time="2023-09-25T11:15:54.333313765Z" level=info msg="ignoring event" container=4639ca8dd07b811fe08975716d936105957edaa4bdbec66da2b5a5ced8930b84 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 25 11:15:54 ingress-addon-legacy-907000 dockerd[1074]: time="2023-09-25T11:15:54.333403137Z" level=info msg="shim disconnected" id=4639ca8dd07b811fe08975716d936105957edaa4bdbec66da2b5a5ced8930b84 namespace=moby
	Sep 25 11:15:54 ingress-addon-legacy-907000 dockerd[1074]: time="2023-09-25T11:15:54.333430303Z" level=warning msg="cleaning up after shim disconnected" id=4639ca8dd07b811fe08975716d936105957edaa4bdbec66da2b5a5ced8930b84 namespace=moby
	Sep 25 11:15:54 ingress-addon-legacy-907000 dockerd[1074]: time="2023-09-25T11:15:54.333435428Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 25 11:15:58 ingress-addon-legacy-907000 dockerd[1074]: time="2023-09-25T11:15:58.373931986Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 25 11:15:58 ingress-addon-legacy-907000 dockerd[1074]: time="2023-09-25T11:15:58.374251478Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 11:15:58 ingress-addon-legacy-907000 dockerd[1074]: time="2023-09-25T11:15:58.374269352Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 25 11:15:58 ingress-addon-legacy-907000 dockerd[1074]: time="2023-09-25T11:15:58.374276352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 11:15:58 ingress-addon-legacy-907000 dockerd[1068]: time="2023-09-25T11:15:58.409313877Z" level=info msg="ignoring event" container=b1e1af0055198f6b553e87a66465544715be89fde8124e3fd601f3774a60806a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 25 11:15:58 ingress-addon-legacy-907000 dockerd[1074]: time="2023-09-25T11:15:58.409558913Z" level=info msg="shim disconnected" id=b1e1af0055198f6b553e87a66465544715be89fde8124e3fd601f3774a60806a namespace=moby
	Sep 25 11:15:58 ingress-addon-legacy-907000 dockerd[1074]: time="2023-09-25T11:15:58.409788907Z" level=warning msg="cleaning up after shim disconnected" id=b1e1af0055198f6b553e87a66465544715be89fde8124e3fd601f3774a60806a namespace=moby
	Sep 25 11:15:58 ingress-addon-legacy-907000 dockerd[1074]: time="2023-09-25T11:15:58.409798865Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 25 11:15:59 ingress-addon-legacy-907000 dockerd[1068]: time="2023-09-25T11:15:59.821543774Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=ea54cbda964a87be40fb40bf5a3864f09c50c31ab12065e6b5bd4f6c9797e42c
	Sep 25 11:15:59 ingress-addon-legacy-907000 dockerd[1068]: time="2023-09-25T11:15:59.837846893Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=ea54cbda964a87be40fb40bf5a3864f09c50c31ab12065e6b5bd4f6c9797e42c
	Sep 25 11:15:59 ingress-addon-legacy-907000 dockerd[1068]: time="2023-09-25T11:15:59.896785616Z" level=info msg="ignoring event" container=ea54cbda964a87be40fb40bf5a3864f09c50c31ab12065e6b5bd4f6c9797e42c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 25 11:15:59 ingress-addon-legacy-907000 dockerd[1074]: time="2023-09-25T11:15:59.898036792Z" level=info msg="shim disconnected" id=ea54cbda964a87be40fb40bf5a3864f09c50c31ab12065e6b5bd4f6c9797e42c namespace=moby
	Sep 25 11:15:59 ingress-addon-legacy-907000 dockerd[1074]: time="2023-09-25T11:15:59.898109624Z" level=warning msg="cleaning up after shim disconnected" id=ea54cbda964a87be40fb40bf5a3864f09c50c31ab12065e6b5bd4f6c9797e42c namespace=moby
	Sep 25 11:15:59 ingress-addon-legacy-907000 dockerd[1074]: time="2023-09-25T11:15:59.898121248Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 25 11:15:59 ingress-addon-legacy-907000 dockerd[1068]: time="2023-09-25T11:15:59.942930838Z" level=info msg="ignoring event" container=36034fddb596a54b80350e36a0e5116358ca5a0172097b5f92005b97a538965f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 25 11:15:59 ingress-addon-legacy-907000 dockerd[1074]: time="2023-09-25T11:15:59.943065959Z" level=info msg="shim disconnected" id=36034fddb596a54b80350e36a0e5116358ca5a0172097b5f92005b97a538965f namespace=moby
	Sep 25 11:15:59 ingress-addon-legacy-907000 dockerd[1074]: time="2023-09-25T11:15:59.943099083Z" level=warning msg="cleaning up after shim disconnected" id=36034fddb596a54b80350e36a0e5116358ca5a0172097b5f92005b97a538965f namespace=moby
	Sep 25 11:15:59 ingress-addon-legacy-907000 dockerd[1074]: time="2023-09-25T11:15:59.943104375Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE                                      COMMAND                  CREATED              STATUS                          PORTS     NAMES
	b1e1af005519   a39a07419475                               "/hello-app"             6 seconds ago        Exited (1) 6 seconds ago                  k8s_hello-world-app_hello-world-app-5f5d8b66bb-86w8n_default_ba2b264e-5321-4351-a620-75cd9f683aa2_2
	d5fc7cf9543b   k8s.gcr.io/pause:3.2                       "/pause"                 25 seconds ago       Up 25 seconds                             k8s_POD_hello-world-app-5f5d8b66bb-86w8n_default_ba2b264e-5321-4351-a620-75cd9f683aa2_0
	0250dfc320ab   nginx                                      "/docker-entrypoint.…"   33 seconds ago       Up 33 seconds                             k8s_nginx_nginx_default_72eaa571-d81a-4913-8392-07ffdb6314ce_0
	5c09ab8b6169   k8s.gcr.io/pause:3.2                       "/pause"                 36 seconds ago       Up 36 seconds                             k8s_POD_nginx_default_72eaa571-d81a-4913-8392-07ffdb6314ce_0
	4639ca8dd07b   k8s.gcr.io/pause:3.2                       "/pause"                 53 seconds ago       Exited (0) 10 seconds ago                 k8s_POD_kube-ingress-dns-minikube_kube-system_4436ff59-9c33-48cf-bd23-17e6908254e6_0
	ea54cbda964a   registry.k8s.io/ingress-nginx/controller   "/usr/bin/dumb-init …"   55 seconds ago       Exited (137) 4 seconds ago                k8s_controller_ingress-nginx-controller-7fcf777cb7-gzljn_ingress-nginx_18001cd6-bd2b-4f93-a728-6d701a3e4995_0
	36034fddb596   k8s.gcr.io/pause:3.2                       "/pause"                 About a minute ago   Exited (0) 4 seconds ago                  k8s_POD_ingress-nginx-controller-7fcf777cb7-gzljn_ingress-nginx_18001cd6-bd2b-4f93-a728-6d701a3e4995_0
	a80dec49f27a   a883f7fc3561                               "/kube-webhook-certg…"   About a minute ago   Exited (0) About a minute ago             k8s_patch_ingress-nginx-admission-patch-zxrbx_ingress-nginx_a4930049-a2c1-451c-9bc9-786fd8c7634d_1
	564c8d1f26b7   jettech/kube-webhook-certgen               "/kube-webhook-certg…"   About a minute ago   Exited (0) About a minute ago             k8s_create_ingress-nginx-admission-create-zvhbf_ingress-nginx_46168b12-52c7-4b76-bdff-0dee5a71aa66_0
	985fa555ccb0   gcr.io/k8s-minikube/storage-provisioner    "/storage-provisioner"   About a minute ago   Up About a minute                         k8s_storage-provisioner_storage-provisioner_kube-system_5f1e6569-51c3-4770-93c2-51ed3868828d_0
	6bbf51dc520a   k8s.gcr.io/pause:3.2                       "/pause"                 About a minute ago   Exited (0) About a minute ago             k8s_POD_ingress-nginx-admission-create-zvhbf_ingress-nginx_46168b12-52c7-4b76-bdff-0dee5a71aa66_0
	09a5c6449c0b   k8s.gcr.io/pause:3.2                       "/pause"                 About a minute ago   Exited (0) About a minute ago             k8s_POD_ingress-nginx-admission-patch-zxrbx_ingress-nginx_a4930049-a2c1-451c-9bc9-786fd8c7634d_0
	75c7e32af410   k8s.gcr.io/pause:3.2                       "/pause"                 About a minute ago   Up About a minute                         k8s_POD_storage-provisioner_kube-system_5f1e6569-51c3-4770-93c2-51ed3868828d_0
	d27c140adf25   565297bc6f7d                               "/usr/local/bin/kube…"   About a minute ago   Up About a minute                         k8s_kube-proxy_kube-proxy-hqw2l_kube-system_aa432a4d-d79b-4cd9-a274-340f672388e6_0
	36df1a4f36b8   k8s.gcr.io/pause:3.2                       "/pause"                 About a minute ago   Up About a minute                         k8s_POD_kube-proxy-hqw2l_kube-system_aa432a4d-d79b-4cd9-a274-340f672388e6_0
	17f82118281d   6e17ba78cf3e                               "/coredns -conf /etc…"   About a minute ago   Up About a minute                         k8s_coredns_coredns-66bff467f8-gj5hn_kube-system_2bd8985a-3a79-426c-ab66-d433bba606f3_0
	f40cd7136490   k8s.gcr.io/pause:3.2                       "/pause"                 About a minute ago   Up About a minute                         k8s_POD_coredns-66bff467f8-gj5hn_kube-system_2bd8985a-3a79-426c-ab66-d433bba606f3_0
	992d36243b87   095f37015706                               "kube-scheduler --au…"   About a minute ago   Up About a minute                         k8s_kube-scheduler_kube-scheduler-ingress-addon-legacy-907000_kube-system_d12e497b0008e22acbcd5a9cf2dd48ac_0
	bbe8d945e2c2   ab707b0a0ea3                               "etcd --advertise-cl…"   About a minute ago   Up About a minute                         k8s_etcd_etcd-ingress-addon-legacy-907000_kube-system_66863319d89d6d88ba35da7d8dca5ca5_0
	84cf8b9d8ebf   68a4fac29a86                               "kube-controller-man…"   About a minute ago   Up About a minute                         k8s_kube-controller-manager_kube-controller-manager-ingress-addon-legacy-907000_kube-system_b395a1e17534e69e27827b1f8d737725_0
	7605fe8d65b2   2694cf044d66                               "kube-apiserver --ad…"   About a minute ago   Up About a minute                         k8s_kube-apiserver_kube-apiserver-ingress-addon-legacy-907000_kube-system_36bf945afdf7c8fc8d73074b2bf4e3c3_0
	a3775a312c90   k8s.gcr.io/pause:3.2                       "/pause"                 About a minute ago   Up About a minute                         k8s_POD_etcd-ingress-addon-legacy-907000_kube-system_66863319d89d6d88ba35da7d8dca5ca5_0
	4ff02a5befd0   k8s.gcr.io/pause:3.2                       "/pause"                 About a minute ago   Up About a minute                         k8s_POD_kube-controller-manager-ingress-addon-legacy-907000_kube-system_b395a1e17534e69e27827b1f8d737725_0
	ce467556340f   k8s.gcr.io/pause:3.2                       "/pause"                 About a minute ago   Up About a minute                         k8s_POD_kube-apiserver-ingress-addon-legacy-907000_kube-system_36bf945afdf7c8fc8d73074b2bf4e3c3_0
	cc858905d13f   k8s.gcr.io/pause:3.2                       "/pause"                 About a minute ago   Up About a minute                         k8s_POD_kube-scheduler-ingress-addon-legacy-907000_kube-system_d12e497b0008e22acbcd5a9cf2dd48ac_0
	time="2023-09-25T11:16:04Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService"
	
	* 
	* ==> coredns [17f82118281d] <==
	* [INFO] 172.17.0.1:41211 - 49049 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000019458s
	[INFO] 172.17.0.1:59047 - 55355 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000033792s
	[INFO] 172.17.0.1:41211 - 9016 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000009834s
	[INFO] 172.17.0.1:59047 - 3353 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000027875s
	[INFO] 172.17.0.1:59047 - 42402 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000025667s
	[INFO] 172.17.0.1:41211 - 2070 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000048916s
	[INFO] 172.17.0.1:31139 - 1606 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000009667s
	[INFO] 172.17.0.1:59047 - 42856 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000111124s
	[INFO] 172.17.0.1:31139 - 12246 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000031124s
	[INFO] 172.17.0.1:31139 - 37917 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000082083s
	[INFO] 172.17.0.1:31139 - 32565 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00000975s
	[INFO] 172.17.0.1:31139 - 30328 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000009125s
	[INFO] 172.17.0.1:59047 - 32952 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000078957s
	[INFO] 172.17.0.1:31139 - 38654 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000008542s
	[INFO] 172.17.0.1:41211 - 23275 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000007583s
	[INFO] 172.17.0.1:31139 - 9153 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000011666s
	[INFO] 172.17.0.1:41211 - 18043 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.0000075s
	[INFO] 172.17.0.1:41211 - 63495 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000033916s
	[INFO] 172.17.0.1:15325 - 61769 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000015791s
	[INFO] 172.17.0.1:15325 - 49168 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000009999s
	[INFO] 172.17.0.1:15325 - 51442 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000008542s
	[INFO] 172.17.0.1:15325 - 2168 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000008333s
	[INFO] 172.17.0.1:15325 - 11762 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000007667s
	[INFO] 172.17.0.1:15325 - 31222 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000047333s
	[INFO] 172.17.0.1:15325 - 61810 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000032s
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-907000
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-907000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1bf6c3d5317028f348e55ea19d261973a6487d3c
	                    minikube.k8s.io/name=ingress-addon-legacy-907000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_25T04_14_40_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 25 Sep 2023 11:14:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-907000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 25 Sep 2023 11:15:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 25 Sep 2023 11:15:46 +0000   Mon, 25 Sep 2023 11:14:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 25 Sep 2023 11:15:46 +0000   Mon, 25 Sep 2023 11:14:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 25 Sep 2023 11:15:46 +0000   Mon, 25 Sep 2023 11:14:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 25 Sep 2023 11:15:46 +0000   Mon, 25 Sep 2023 11:14:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.6
	  Hostname:    ingress-addon-legacy-907000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             4003124Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             4003124Ki
	  pods:               110
	System Info:
	  Machine ID:                 8f94326e96a44990a084e00595075db2
	  System UUID:                8f94326e96a44990a084e00595075db2
	  Boot ID:                    6f266cc8-0f4f-4009-b7d5-973ebaf562c9
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-86w8n                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 coredns-66bff467f8-gj5hn                               100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     69s
	  kube-system                 etcd-ingress-addon-legacy-907000                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-apiserver-ingress-addon-legacy-907000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-907000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-proxy-hqw2l                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 kube-scheduler-ingress-addon-legacy-907000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         69s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             70Mi (1%)   170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From        Message
	  ----    ------                   ----  ----        -------
	  Normal  Starting                 78s   kubelet     Starting kubelet.
	  Normal  NodeAllocatableEnforced  78s   kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  78s   kubelet     Node ingress-addon-legacy-907000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    78s   kubelet     Node ingress-addon-legacy-907000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     78s   kubelet     Node ingress-addon-legacy-907000 status is now: NodeHasSufficientPID
	  Normal  Starting                 68s   kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Sep25 11:14] ACPI: SRAT not present
	[  +0.000000] KASLR disabled due to lack of seed
	[  +0.641430] EINJ: EINJ table not found.
	[  +0.516248] systemd-fstab-generator[116]: Ignoring "noauto" for root device
	[  +0.043118] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000860] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +4.109113] systemd-fstab-generator[480]: Ignoring "noauto" for root device
	[  +0.075373] systemd-fstab-generator[491]: Ignoring "noauto" for root device
	[  +0.413962] systemd-fstab-generator[711]: Ignoring "noauto" for root device
	[  +0.172932] systemd-fstab-generator[747]: Ignoring "noauto" for root device
	[  +0.083261] systemd-fstab-generator[758]: Ignoring "noauto" for root device
	[  +0.083669] systemd-fstab-generator[771]: Ignoring "noauto" for root device
	[  +4.317436] systemd-fstab-generator[1061]: Ignoring "noauto" for root device
	[  +1.652522] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.580180] systemd-fstab-generator[1532]: Ignoring "noauto" for root device
	[  +7.862164] kauditd_printk_skb: 29 callbacks suppressed
	[  +0.088319] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[  +6.236994] systemd-fstab-generator[2603]: Ignoring "noauto" for root device
	[ +16.263433] kauditd_printk_skb: 7 callbacks suppressed
	[Sep25 11:15] kauditd_printk_skb: 15 callbacks suppressed
	[  +0.906960] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[ +36.523470] kauditd_printk_skb: 3 callbacks suppressed
	
	* 
	* ==> etcd [bbe8d945e2c2] <==
	* raft2023/09/25 11:14:34 INFO: ed054832bd1917e1 became follower at term 0
	raft2023/09/25 11:14:34 INFO: newRaft ed054832bd1917e1 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/09/25 11:14:34 INFO: ed054832bd1917e1 became follower at term 1
	raft2023/09/25 11:14:34 INFO: ed054832bd1917e1 switched to configuration voters=(17079136544630577121)
	2023-09-25 11:14:34.799572 W | auth: simple token is not cryptographically signed
	2023-09-25 11:14:34.874688 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-09-25 11:14:34.889864 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-09-25 11:14:34.889932 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-09-25 11:14:34.890086 I | embed: listening for peers on 192.168.105.6:2380
	2023-09-25 11:14:34.890142 I | etcdserver: ed054832bd1917e1 as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2023/09/25 11:14:34 INFO: ed054832bd1917e1 switched to configuration voters=(17079136544630577121)
	2023-09-25 11:14:34.890351 I | etcdserver/membership: added member ed054832bd1917e1 [https://192.168.105.6:2380] to cluster 45a39c2c59b0edf4
	raft2023/09/25 11:14:34 INFO: ed054832bd1917e1 is starting a new election at term 1
	raft2023/09/25 11:14:34 INFO: ed054832bd1917e1 became candidate at term 2
	raft2023/09/25 11:14:34 INFO: ed054832bd1917e1 received MsgVoteResp from ed054832bd1917e1 at term 2
	raft2023/09/25 11:14:34 INFO: ed054832bd1917e1 became leader at term 2
	raft2023/09/25 11:14:34 INFO: raft.node: ed054832bd1917e1 elected leader ed054832bd1917e1 at term 2
	2023-09-25 11:14:34.890693 I | etcdserver: published {Name:ingress-addon-legacy-907000 ClientURLs:[https://192.168.105.6:2379]} to cluster 45a39c2c59b0edf4
	2023-09-25 11:14:34.890729 I | embed: ready to serve client requests
	2023-09-25 11:14:34.891350 I | embed: serving client requests on 192.168.105.6:2379
	2023-09-25 11:14:34.891390 I | etcdserver: setting up the initial cluster version to 3.4
	2023-09-25 11:14:34.891547 I | embed: ready to serve client requests
	2023-09-25 11:14:34.891993 I | embed: serving client requests on 127.0.0.1:2379
	2023-09-25 11:14:34.892106 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-09-25 11:14:34.892166 I | etcdserver/api: enabled capabilities for version 3.4
	
	* 
	* ==> kernel <==
	*  11:16:04 up 1 min,  0 users,  load average: 0.60, 0.25, 0.09
	Linux ingress-addon-legacy-907000 5.10.57 #1 SMP PREEMPT Mon Sep 18 20:10:16 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [7605fe8d65b2] <==
	* I0925 11:14:37.046540       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	I0925 11:14:37.046554       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
	I0925 11:14:37.123832       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0925 11:14:37.123831       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0925 11:14:37.123854       1 cache.go:39] Caches are synced for autoregister controller
	I0925 11:14:37.123865       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0925 11:14:37.161283       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0925 11:14:38.022142       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0925 11:14:38.022210       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0925 11:14:38.033237       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0925 11:14:38.038975       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0925 11:14:38.039017       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0925 11:14:38.179843       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0925 11:14:38.190991       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0925 11:14:38.288548       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.105.6]
	I0925 11:14:38.288927       1 controller.go:609] quota admission added evaluator for: endpoints
	I0925 11:14:38.290821       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0925 11:14:39.330573       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0925 11:14:39.864088       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0925 11:14:40.036264       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0925 11:14:46.292787       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0925 11:14:55.563320       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0925 11:14:55.661024       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0925 11:14:57.974387       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0925 11:15:27.647657       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	* 
	* ==> kube-controller-manager [84cf8b9d8ebf] <==
	* W0925 11:14:55.613804       1 node_lifecycle_controller.go:1048] Missing timestamp for Node ingress-addon-legacy-907000. Assuming now as a timestamp.
	I0925 11:14:55.613824       1 node_lifecycle_controller.go:1249] Controller detected that zone  is now in state Normal.
	I0925 11:14:55.613899       1 taint_manager.go:187] Starting NoExecuteTaintManager
	I0925 11:14:55.614035       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ingress-addon-legacy-907000", UID:"fb912097-0d18-4e27-8307-710daead3bce", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node ingress-addon-legacy-907000 event: Registered Node ingress-addon-legacy-907000 in Controller
	I0925 11:14:55.615643       1 shared_informer.go:230] Caches are synced for resource quota 
	I0925 11:14:55.629740       1 shared_informer.go:230] Caches are synced for stateful set 
	I0925 11:14:55.630065       1 shared_informer.go:230] Caches are synced for GC 
	I0925 11:14:55.639216       1 shared_informer.go:230] Caches are synced for job 
	I0925 11:14:55.640745       1 shared_informer.go:230] Caches are synced for endpoint_slice 
	I0925 11:14:55.659130       1 shared_informer.go:230] Caches are synced for daemon sets 
	I0925 11:14:55.660114       1 shared_informer.go:230] Caches are synced for PVC protection 
	I0925 11:14:55.660179       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0925 11:14:55.664528       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"b4c7b7f2-b169-4fbc-98d7-54e600702f7a", APIVersion:"apps/v1", ResourceVersion:"205", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-hqw2l
	I0925 11:14:55.964482       1 request.go:621] Throttling request took 1.049207907s, request: GET:https://control-plane.minikube.internal:8443/apis/apiextensions.k8s.io/v1beta1?timeout=32s
	I0925 11:14:56.565097       1 shared_informer.go:223] Waiting for caches to sync for resource quota
	I0925 11:14:56.565133       1 shared_informer.go:230] Caches are synced for resource quota 
	I0925 11:14:57.970474       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"6cb539e2-a494-436e-a651-ca3550900545", APIVersion:"apps/v1", ResourceVersion:"404", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0925 11:14:57.982782       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"c806643c-a028-4cb0-ae79-ca2d7885ee6d", APIVersion:"apps/v1", ResourceVersion:"406", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-gzljn
	I0925 11:14:57.997312       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"6c46d414-8455-49fa-ba7e-c323e00b4005", APIVersion:"batch/v1", ResourceVersion:"416", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-zxrbx
	I0925 11:14:57.998056       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"103a3417-5afa-4624-ba80-bd9faa3c2147", APIVersion:"batch/v1", ResourceVersion:"408", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-zvhbf
	I0925 11:15:01.498937       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"103a3417-5afa-4624-ba80-bd9faa3c2147", APIVersion:"batch/v1", ResourceVersion:"425", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0925 11:15:02.515747       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"6c46d414-8455-49fa-ba7e-c323e00b4005", APIVersion:"batch/v1", ResourceVersion:"424", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0925 11:15:38.928262       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"0c85cb05-9fd6-404d-989a-23eac4ddb73a", APIVersion:"apps/v1", ResourceVersion:"565", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0925 11:15:38.936626       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"63fe058d-ff5c-4004-a39e-94d45ef63909", APIVersion:"apps/v1", ResourceVersion:"566", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-86w8n
	E0925 11:16:02.568870       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-flwlz" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	
	* 
	* ==> kube-proxy [d27c140adf25] <==
	* W0925 11:14:56.434026       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0925 11:14:56.438126       1 node.go:136] Successfully retrieved node IP: 192.168.105.6
	I0925 11:14:56.438146       1 server_others.go:186] Using iptables Proxier.
	I0925 11:14:56.438369       1 server.go:583] Version: v1.18.20
	I0925 11:14:56.440440       1 config.go:315] Starting service config controller
	I0925 11:14:56.440488       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0925 11:14:56.441068       1 config.go:133] Starting endpoints config controller
	I0925 11:14:56.441077       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0925 11:14:56.543301       1 shared_informer.go:230] Caches are synced for endpoints config 
	I0925 11:14:56.543302       1 shared_informer.go:230] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [992d36243b87] <==
	* I0925 11:14:37.072778       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0925 11:14:37.072818       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0925 11:14:37.073940       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0925 11:14:37.073957       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0925 11:14:37.073998       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0925 11:14:37.074032       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0925 11:14:37.075650       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0925 11:14:37.075824       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0925 11:14:37.075862       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0925 11:14:37.075863       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0925 11:14:37.075887       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0925 11:14:37.075891       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0925 11:14:37.075908       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0925 11:14:37.075915       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0925 11:14:37.075929       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0925 11:14:37.075950       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0925 11:14:37.076110       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0925 11:14:37.076496       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0925 11:14:37.914450       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0925 11:14:38.020364       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0925 11:14:38.116003       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0925 11:14:38.135987       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0925 11:14:38.142056       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0925 11:14:38.674205       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0925 11:14:55.594273       1 factory.go:503] pod: kube-system/storage-provisioner is already present in the active queue
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-09-25 11:14:12 UTC, ends at Mon 2023-09-25 11:16:05 UTC. --
	Sep 25 11:15:44 ingress-addon-legacy-907000 kubelet[2609]: I0925 11:15:44.043148    2609 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: b571f381b27c7f4c294d4947591cbdce4671b13fdf2ee4e61b6984bcd104fbdc
	Sep 25 11:15:44 ingress-addon-legacy-907000 kubelet[2609]: E0925 11:15:44.043905    2609 pod_workers.go:191] Error syncing pod ba2b264e-5321-4351-a620-75cd9f683aa2 ("hello-world-app-5f5d8b66bb-86w8n_default(ba2b264e-5321-4351-a620-75cd9f683aa2)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-86w8n_default(ba2b264e-5321-4351-a620-75cd9f683aa2)"
	Sep 25 11:15:45 ingress-addon-legacy-907000 kubelet[2609]: I0925 11:15:45.319968    2609 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 65799fa2b44c8ca01a4cd65b4196fb45721f096bd6540bb14f05ba5c9f6d4e81
	Sep 25 11:15:45 ingress-addon-legacy-907000 kubelet[2609]: E0925 11:15:45.320314    2609 pod_workers.go:191] Error syncing pod 4436ff59-9c33-48cf-bd23-17e6908254e6 ("kube-ingress-dns-minikube_kube-system(4436ff59-9c33-48cf-bd23-17e6908254e6)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with CrashLoopBackOff: "back-off 20s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(4436ff59-9c33-48cf-bd23-17e6908254e6)"
	Sep 25 11:15:54 ingress-addon-legacy-907000 kubelet[2609]: I0925 11:15:54.361632    2609 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-9gjfj" (UniqueName: "kubernetes.io/secret/4436ff59-9c33-48cf-bd23-17e6908254e6-minikube-ingress-dns-token-9gjfj") pod "4436ff59-9c33-48cf-bd23-17e6908254e6" (UID: "4436ff59-9c33-48cf-bd23-17e6908254e6")
	Sep 25 11:15:54 ingress-addon-legacy-907000 kubelet[2609]: I0925 11:15:54.364377    2609 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4436ff59-9c33-48cf-bd23-17e6908254e6-minikube-ingress-dns-token-9gjfj" (OuterVolumeSpecName: "minikube-ingress-dns-token-9gjfj") pod "4436ff59-9c33-48cf-bd23-17e6908254e6" (UID: "4436ff59-9c33-48cf-bd23-17e6908254e6"). InnerVolumeSpecName "minikube-ingress-dns-token-9gjfj". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 25 11:15:54 ingress-addon-legacy-907000 kubelet[2609]: I0925 11:15:54.464189    2609 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-9gjfj" (UniqueName: "kubernetes.io/secret/4436ff59-9c33-48cf-bd23-17e6908254e6-minikube-ingress-dns-token-9gjfj") on node "ingress-addon-legacy-907000" DevicePath ""
	Sep 25 11:15:55 ingress-addon-legacy-907000 kubelet[2609]: I0925 11:15:55.202964    2609 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 65799fa2b44c8ca01a4cd65b4196fb45721f096bd6540bb14f05ba5c9f6d4e81
	Sep 25 11:15:57 ingress-addon-legacy-907000 kubelet[2609]: E0925 11:15:57.817560    2609 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-gzljn.1788201a7b605436", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-gzljn", UID:"18001cd6-bd2b-4f93-a728-6d701a3e4995", APIVersion:"v1", ResourceVersion:"413", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-907000"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc13c78db70a55236, ext:77981801759, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc13c78db70a55236, ext:77981801759, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-gzljn.1788201a7b605436" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Sep 25 11:15:57 ingress-addon-legacy-907000 kubelet[2609]: E0925 11:15:57.836445    2609 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-gzljn.1788201a7b605436", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-gzljn", UID:"18001cd6-bd2b-4f93-a728-6d701a3e4995", APIVersion:"v1", ResourceVersion:"413", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-907000"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc13c78db70a55236, ext:77981801759, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc13c78db71779d7d, ext:77995583548, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-gzljn.1788201a7b605436" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Sep 25 11:15:58 ingress-addon-legacy-907000 kubelet[2609]: I0925 11:15:58.320896    2609 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: b571f381b27c7f4c294d4947591cbdce4671b13fdf2ee4e61b6984bcd104fbdc
	Sep 25 11:15:58 ingress-addon-legacy-907000 kubelet[2609]: W0925 11:15:58.428147    2609 container.go:412] Failed to create summary reader for "/kubepods/besteffort/podba2b264e-5321-4351-a620-75cd9f683aa2/b1e1af0055198f6b553e87a66465544715be89fde8124e3fd601f3774a60806a": none of the resources are being tracked.
	Sep 25 11:15:59 ingress-addon-legacy-907000 kubelet[2609]: W0925 11:15:59.277757    2609 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-86w8n through plugin: invalid network status for
	Sep 25 11:15:59 ingress-addon-legacy-907000 kubelet[2609]: I0925 11:15:59.284992    2609 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: b571f381b27c7f4c294d4947591cbdce4671b13fdf2ee4e61b6984bcd104fbdc
	Sep 25 11:15:59 ingress-addon-legacy-907000 kubelet[2609]: I0925 11:15:59.285729    2609 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: b1e1af0055198f6b553e87a66465544715be89fde8124e3fd601f3774a60806a
	Sep 25 11:15:59 ingress-addon-legacy-907000 kubelet[2609]: E0925 11:15:59.286303    2609 pod_workers.go:191] Error syncing pod ba2b264e-5321-4351-a620-75cd9f683aa2 ("hello-world-app-5f5d8b66bb-86w8n_default(ba2b264e-5321-4351-a620-75cd9f683aa2)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-86w8n_default(ba2b264e-5321-4351-a620-75cd9f683aa2)"
	Sep 25 11:16:00 ingress-addon-legacy-907000 kubelet[2609]: W0925 11:16:00.308813    2609 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-86w8n through plugin: invalid network status for
	Sep 25 11:16:00 ingress-addon-legacy-907000 kubelet[2609]: W0925 11:16:00.318391    2609 pod_container_deletor.go:77] Container "36034fddb596a54b80350e36a0e5116358ca5a0172097b5f92005b97a538965f" not found in pod's containers
	Sep 25 11:16:02 ingress-addon-legacy-907000 kubelet[2609]: I0925 11:16:02.026526    2609 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/18001cd6-bd2b-4f93-a728-6d701a3e4995-webhook-cert") pod "18001cd6-bd2b-4f93-a728-6d701a3e4995" (UID: "18001cd6-bd2b-4f93-a728-6d701a3e4995")
	Sep 25 11:16:02 ingress-addon-legacy-907000 kubelet[2609]: I0925 11:16:02.026780    2609 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-sctbd" (UniqueName: "kubernetes.io/secret/18001cd6-bd2b-4f93-a728-6d701a3e4995-ingress-nginx-token-sctbd") pod "18001cd6-bd2b-4f93-a728-6d701a3e4995" (UID: "18001cd6-bd2b-4f93-a728-6d701a3e4995")
	Sep 25 11:16:02 ingress-addon-legacy-907000 kubelet[2609]: I0925 11:16:02.035274    2609 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18001cd6-bd2b-4f93-a728-6d701a3e4995-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "18001cd6-bd2b-4f93-a728-6d701a3e4995" (UID: "18001cd6-bd2b-4f93-a728-6d701a3e4995"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 25 11:16:02 ingress-addon-legacy-907000 kubelet[2609]: I0925 11:16:02.035884    2609 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18001cd6-bd2b-4f93-a728-6d701a3e4995-ingress-nginx-token-sctbd" (OuterVolumeSpecName: "ingress-nginx-token-sctbd") pod "18001cd6-bd2b-4f93-a728-6d701a3e4995" (UID: "18001cd6-bd2b-4f93-a728-6d701a3e4995"). InnerVolumeSpecName "ingress-nginx-token-sctbd". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 25 11:16:02 ingress-addon-legacy-907000 kubelet[2609]: I0925 11:16:02.127683    2609 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/18001cd6-bd2b-4f93-a728-6d701a3e4995-webhook-cert") on node "ingress-addon-legacy-907000" DevicePath ""
	Sep 25 11:16:02 ingress-addon-legacy-907000 kubelet[2609]: I0925 11:16:02.127785    2609 reconciler.go:319] Volume detached for volume "ingress-nginx-token-sctbd" (UniqueName: "kubernetes.io/secret/18001cd6-bd2b-4f93-a728-6d701a3e4995-ingress-nginx-token-sctbd") on node "ingress-addon-legacy-907000" DevicePath ""
	Sep 25 11:16:02 ingress-addon-legacy-907000 kubelet[2609]: W0925 11:16:02.356670    2609 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/18001cd6-bd2b-4f93-a728-6d701a3e4995/volumes" does not exist
	
	* 
	* ==> storage-provisioner [985fa555ccb0] <==
	* I0925 11:14:59.062516       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0925 11:14:59.066356       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0925 11:14:59.066377       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0925 11:14:59.069251       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0925 11:14:59.069745       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-907000_c7ff0f87-15a4-48a3-938c-ecc9156df8b9!
	I0925 11:14:59.070514       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9cbf888d-c5a7-40d4-b077-8303dae780b0", APIVersion:"v1", ResourceVersion:"442", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-907000_c7ff0f87-15a4-48a3-938c-ecc9156df8b9 became leader
	I0925 11:14:59.170354       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-907000_c7ff0f87-15a4-48a3-938c-ecc9156df8b9!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-907000 -n ingress-addon-legacy-907000
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-907000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (53.95s)
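
Reading the post-mortem above: the cluster itself came up (etcd, the apiserver, the scheduler, and kube-proxy all report synced caches), and the failure surface is in the workloads — the kubelet log shows hello-world-app and minikube-ingress-dns stuck in CrashLoopBackOff, and both the controller-manager and the kubelet are rejected while writing into ingress-nginx because that namespace is already terminating. A minimal triage sketch against a live run (plain kubectl, not part of the test harness; the app=hello-world-app label is an assumption based on how a `kubectl create deployment` names its pods):

	# pods that are not Running, cluster-wide (same selector the harness uses above)
	kubectl --context ingress-addon-legacy-907000 get pods -A --field-selector=status.phase!=Running
	# why the app container keeps restarting: events first, then logs of the previous (crashed) container
	kubectl --context ingress-addon-legacy-907000 describe pod -l app=hello-world-app
	kubectl --context ingress-addon-legacy-907000 logs -l app=hello-world-app --previous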

                                                
                                    
TestMountStart/serial/StartWithMountFirst (10.09s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-980000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-980000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.019870334s)

                                                
                                                
-- stdout --
	* [mount-start-1-980000] minikube v1.31.2 on Darwin 13.6 (arm64)
	  - MINIKUBE_LOCATION=17297
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-980000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-980000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-980000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-980000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-980000 -n mount-start-1-980000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-980000 -n mount-start-1-980000: exit status 7 (68.256625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-980000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.09s)
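
The actual error here has nothing to do with the mount flags under test: both VM creation attempts die when socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, i.e. the host-side socket_vmnet daemon is not running or not listening on that path. A quick host-side check, as a sketch — the install prefix and the launch invocation below are assumptions based on a default socket_vmnet install, so adjust them to the local setup:

	# is anything listening on the socket?
	ls -l /var/run/socket_vmnet
	sudo lsof -U | grep socket_vmnet
	pgrep -fl socket_vmnet
	# if not, start the daemon (flags per the socket_vmnet README; the gateway value matches the 192.168.105.x addresses seen in this run)
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet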

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (9.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-352000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:85: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-352000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.762666042s)

                                                
                                                
-- stdout --
	* [multinode-352000] minikube v1.31.2 on Darwin 13.6 (arm64)
	  - MINIKUBE_LOCATION=17297
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-352000 in cluster multinode-352000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-352000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0925 04:18:14.073049    4295 out.go:296] Setting OutFile to fd 1 ...
	I0925 04:18:14.073171    4295 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:18:14.073174    4295 out.go:309] Setting ErrFile to fd 2...
	I0925 04:18:14.073176    4295 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:18:14.073305    4295 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17297-1010/.minikube/bin
	I0925 04:18:14.074354    4295 out.go:303] Setting JSON to false
	I0925 04:18:14.089622    4295 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2869,"bootTime":1695637825,"procs":414,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.6","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 04:18:14.089719    4295 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0925 04:18:14.093660    4295 out.go:177] * [multinode-352000] minikube v1.31.2 on Darwin 13.6 (arm64)
	I0925 04:18:14.099539    4295 out.go:177]   - MINIKUBE_LOCATION=17297
	I0925 04:18:14.103553    4295 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	I0925 04:18:14.099609    4295 notify.go:220] Checking for updates...
	I0925 04:18:14.106461    4295 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 04:18:14.109554    4295 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 04:18:14.112505    4295 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	I0925 04:18:14.113921    4295 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 04:18:14.117682    4295 driver.go:373] Setting default libvirt URI to qemu:///system
	I0925 04:18:14.121490    4295 out.go:177] * Using the qemu2 driver based on user configuration
	I0925 04:18:14.126484    4295 start.go:298] selected driver: qemu2
	I0925 04:18:14.126490    4295 start.go:902] validating driver "qemu2" against <nil>
	I0925 04:18:14.126495    4295 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 04:18:14.128468    4295 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0925 04:18:14.131565    4295 out.go:177] * Automatically selected the socket_vmnet network
	I0925 04:18:14.134621    4295 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0925 04:18:14.134636    4295 cni.go:84] Creating CNI manager for ""
	I0925 04:18:14.134641    4295 cni.go:136] 0 nodes found, recommending kindnet
	I0925 04:18:14.134644    4295 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0925 04:18:14.134649    4295 start_flags.go:321] config:
	{Name:multinode-352000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-352000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 04:18:14.138960    4295 iso.go:125] acquiring lock: {Name:mkf881a60cf9fd1672567914305ff6f7a4f13809 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 04:18:14.146598    4295 out.go:177] * Starting control plane node multinode-352000 in cluster multinode-352000
	I0925 04:18:14.150442    4295 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0925 04:18:14.150463    4295 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0925 04:18:14.150480    4295 cache.go:57] Caching tarball of preloaded images
	I0925 04:18:14.150541    4295 preload.go:174] Found /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0925 04:18:14.150548    4295 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0925 04:18:14.150766    4295 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/multinode-352000/config.json ...
	I0925 04:18:14.150779    4295 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/multinode-352000/config.json: {Name:mkd1b8f717d126193f91a76fc2b7ebba9490cdee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 04:18:14.150994    4295 start.go:365] acquiring machines lock for multinode-352000: {Name:mk918d99818e2cf21e5912bc291ed18d4b442ba6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 04:18:14.151023    4295 start.go:369] acquired machines lock for "multinode-352000" in 24.084µs
	I0925 04:18:14.151033    4295 start.go:93] Provisioning new machine with config: &{Name:multinode-352000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-352000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 04:18:14.151060    4295 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 04:18:14.166520    4295 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0925 04:18:14.182898    4295 start.go:159] libmachine.API.Create for "multinode-352000" (driver="qemu2")
	I0925 04:18:14.182920    4295 client.go:168] LocalClient.Create starting
	I0925 04:18:14.182979    4295 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem
	I0925 04:18:14.183004    4295 main.go:141] libmachine: Decoding PEM data...
	I0925 04:18:14.183014    4295 main.go:141] libmachine: Parsing certificate...
	I0925 04:18:14.183050    4295 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/cert.pem
	I0925 04:18:14.183069    4295 main.go:141] libmachine: Decoding PEM data...
	I0925 04:18:14.183077    4295 main.go:141] libmachine: Parsing certificate...
	I0925 04:18:14.183375    4295 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17297-1010/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0925 04:18:14.327572    4295 main.go:141] libmachine: Creating SSH key...
	I0925 04:18:14.414010    4295 main.go:141] libmachine: Creating Disk image...
	I0925 04:18:14.414017    4295 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 04:18:14.414168    4295 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/multinode-352000/disk.qcow2.raw /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/multinode-352000/disk.qcow2
	I0925 04:18:14.422673    4295 main.go:141] libmachine: STDOUT: 
	I0925 04:18:14.422700    4295 main.go:141] libmachine: STDERR: 
	I0925 04:18:14.422759    4295 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/multinode-352000/disk.qcow2 +20000M
	I0925 04:18:14.430020    4295 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 04:18:14.430033    4295 main.go:141] libmachine: STDERR: 
	I0925 04:18:14.430052    4295 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/multinode-352000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/multinode-352000/disk.qcow2
	I0925 04:18:14.430071    4295 main.go:141] libmachine: Starting QEMU VM...
	I0925 04:18:14.430116    4295 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/multinode-352000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/multinode-352000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/multinode-352000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:39:2d:77:04:07 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/multinode-352000/disk.qcow2
	I0925 04:18:14.431652    4295 main.go:141] libmachine: STDOUT: 
	I0925 04:18:14.431664    4295 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 04:18:14.431683    4295 client.go:171] LocalClient.Create took 248.756958ms
	I0925 04:18:16.433864    4295 start.go:128] duration metric: createHost completed in 2.282782709s
	I0925 04:18:16.433961    4295 start.go:83] releasing machines lock for "multinode-352000", held for 2.282896625s
	W0925 04:18:16.434006    4295 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:18:16.442275    4295 out.go:177] * Deleting "multinode-352000" in qemu2 ...
	W0925 04:18:16.461753    4295 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:18:16.461776    4295 start.go:703] Will try again in 5 seconds ...
	I0925 04:18:21.463968    4295 start.go:365] acquiring machines lock for multinode-352000: {Name:mk918d99818e2cf21e5912bc291ed18d4b442ba6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 04:18:21.464484    4295 start.go:369] acquired machines lock for "multinode-352000" in 431.25µs
	I0925 04:18:21.464619    4295 start.go:93] Provisioning new machine with config: &{Name:multinode-352000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-352000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 04:18:21.464901    4295 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 04:18:21.473568    4295 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0925 04:18:21.520189    4295 start.go:159] libmachine.API.Create for "multinode-352000" (driver="qemu2")
	I0925 04:18:21.520233    4295 client.go:168] LocalClient.Create starting
	I0925 04:18:21.520337    4295 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem
	I0925 04:18:21.520397    4295 main.go:141] libmachine: Decoding PEM data...
	I0925 04:18:21.520433    4295 main.go:141] libmachine: Parsing certificate...
	I0925 04:18:21.520509    4295 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/cert.pem
	I0925 04:18:21.520549    4295 main.go:141] libmachine: Decoding PEM data...
	I0925 04:18:21.520562    4295 main.go:141] libmachine: Parsing certificate...
	I0925 04:18:21.521077    4295 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17297-1010/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0925 04:18:21.651085    4295 main.go:141] libmachine: Creating SSH key...
	I0925 04:18:21.752194    4295 main.go:141] libmachine: Creating Disk image...
	I0925 04:18:21.752201    4295 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 04:18:21.752333    4295 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/multinode-352000/disk.qcow2.raw /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/multinode-352000/disk.qcow2
	I0925 04:18:21.760941    4295 main.go:141] libmachine: STDOUT: 
	I0925 04:18:21.760955    4295 main.go:141] libmachine: STDERR: 
	I0925 04:18:21.761001    4295 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/multinode-352000/disk.qcow2 +20000M
	I0925 04:18:21.768159    4295 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 04:18:21.768172    4295 main.go:141] libmachine: STDERR: 
	I0925 04:18:21.768183    4295 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/multinode-352000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/multinode-352000/disk.qcow2
	I0925 04:18:21.768191    4295 main.go:141] libmachine: Starting QEMU VM...
	I0925 04:18:21.768231    4295 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/multinode-352000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/multinode-352000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/multinode-352000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:1a:a1:c9:28:58 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/multinode-352000/disk.qcow2
	I0925 04:18:21.769834    4295 main.go:141] libmachine: STDOUT: 
	I0925 04:18:21.769847    4295 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 04:18:21.769858    4295 client.go:171] LocalClient.Create took 249.61975ms
	I0925 04:18:23.772036    4295 start.go:128] duration metric: createHost completed in 2.307106334s
	I0925 04:18:23.772094    4295 start.go:83] releasing machines lock for "multinode-352000", held for 2.307583334s
	W0925 04:18:23.772445    4295 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-352000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-352000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:18:23.780152    4295 out.go:177] 
	W0925 04:18:23.784239    4295 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 04:18:23.784265    4295 out.go:239] * 
	* 
	W0925 04:18:23.786785    4295 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0925 04:18:23.796133    4295 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:87: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-352000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-352000 -n multinode-352000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-352000 -n multinode-352000: exit status 7 (64.354333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-352000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.83s)
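
The -v=8 --alsologtostderr trace narrows the failure precisely: both qemu-img convert/resize steps succeed, and the start dies in libmachine's final exec, where /opt/socket_vmnet/bin/socket_vmnet_client must connect to /var/run/socket_vmnet before it can hand fd 3 to qemu-system-aarch64. The connect step can be exercised without minikube at all (a sketch; it assumes socket_vmnet_client takes the socket path followed by an arbitrary command to wrap, so a no-op command isolates the connection):

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true; echo "exit: $?"
	# a "Failed to connect ... Connection refused" here reproduces the test failure with no VM involved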

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (88.32s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-352000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:481: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-352000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (117.316334ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-352000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:483: failed to create busybox deployment to multinode cluster
multinode_test.go:486: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-352000 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-352000 -- rollout status deployment/busybox: exit status 1 (55.894834ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-352000"

                                                
                                                
** /stderr **
multinode_test.go:488: failed to deploy busybox to multinode cluster
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-352000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-352000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (53.720417ms)

** stderr ** 
	error: no server found for cluster "multinode-352000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-352000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-352000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.752167ms)

** stderr ** 
	error: no server found for cluster "multinode-352000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-352000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-352000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (100.963167ms)

** stderr ** 
	error: no server found for cluster "multinode-352000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-352000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-352000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.413167ms)

** stderr ** 
	error: no server found for cluster "multinode-352000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-352000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-352000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.335ms)

** stderr ** 
	error: no server found for cluster "multinode-352000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-352000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-352000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.765916ms)

** stderr ** 
	error: no server found for cluster "multinode-352000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-352000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-352000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (100.338917ms)

** stderr ** 
	error: no server found for cluster "multinode-352000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-352000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-352000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (99.367167ms)

** stderr ** 
	error: no server found for cluster "multinode-352000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-352000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-352000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (99.513375ms)

** stderr ** 
	error: no server found for cluster "multinode-352000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
E0925 04:19:18.710906    1469 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/functional-742000/client.crt: no such file or directory
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-352000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-352000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (99.023625ms)

** stderr ** 
	error: no server found for cluster "multinode-352000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
E0925 04:19:46.407003    1469 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/functional-742000/client.crt: no such file or directory
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-352000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-352000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.273375ms)

** stderr ** 
	error: no server found for cluster "multinode-352000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:512: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:516: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-352000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:516: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-352000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (53.565833ms)

** stderr ** 
	error: no server found for cluster "multinode-352000"

** /stderr **
multinode_test.go:518: failed to get Pod names
multinode_test.go:524: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-352000 -- exec  -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-352000 -- exec  -- nslookup kubernetes.io: exit status 1 (52.841417ms)

** stderr ** 
	error: no server found for cluster "multinode-352000"

** /stderr **
multinode_test.go:526: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-352000 -- exec  -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-352000 -- exec  -- nslookup kubernetes.default: exit status 1 (52.896ms)

** stderr ** 
	error: no server found for cluster "multinode-352000"

** /stderr **
multinode_test.go:536: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:542: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-352000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-352000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (52.726667ms)

** stderr ** 
	error: no server found for cluster "multinode-352000"

** /stderr **
multinode_test.go:544: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-352000 -n multinode-352000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-352000 -n multinode-352000: exit status 7 (27.460583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-352000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (88.32s)

TestMultiNode/serial/PingHostFrom2Pods (0.08s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-352000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-352000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (52.421875ms)

** stderr ** 
	error: no server found for cluster "multinode-352000"

** /stderr **
multinode_test.go:554: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-352000 -n multinode-352000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-352000 -n multinode-352000: exit status 7 (27.635459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-352000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.08s)

TestMultiNode/serial/AddNode (0.07s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-352000 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-352000 -v 3 --alsologtostderr: exit status 89 (38.634708ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-352000"

-- /stdout --
** stderr ** 
	I0925 04:19:52.299131    4389 out.go:296] Setting OutFile to fd 1 ...
	I0925 04:19:52.299299    4389 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:19:52.299302    4389 out.go:309] Setting ErrFile to fd 2...
	I0925 04:19:52.299305    4389 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:19:52.299429    4389 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17297-1010/.minikube/bin
	I0925 04:19:52.299668    4389 mustload.go:65] Loading cluster: multinode-352000
	I0925 04:19:52.299851    4389 config.go:182] Loaded profile config "multinode-352000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 04:19:52.304819    4389 out.go:177] * The control plane node must be running for this command
	I0925 04:19:52.307963    4389 out.go:177]   To start a cluster, run: "minikube start -p multinode-352000"

** /stderr **
multinode_test.go:112: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-352000 -v 3 --alsologtostderr" : exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-352000 -n multinode-352000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-352000 -n multinode-352000: exit status 7 (27.745042ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-352000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

TestMultiNode/serial/ProfileList (0.1s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:155: expected profile "multinode-352000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-352000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-352000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"VMDriver\":\"\",\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":0,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.2\",\"ClusterName\":\"multinode-352000\",\"Namespace\":\"default\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\",\"NodeIP\":\"\",\"NodePort\":8443,\"NodeName\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-352000 -n multinode-352000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-352000 -n multinode-352000: exit status 7 (27.810125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-352000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.10s)
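
For reference, the shape of the failing check: the test decodes `profile list --output json` and counts `Config.Nodes`, and because the VM never started, the saved profile still lists only its single control-plane node. A minimal sketch under stated assumptions (hypothetical struct names; only the fields visible in the failure message above):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// profileList mirrors only the fields of the JSON above that the
	// node-count assertion reads (key names taken from the failure message).
	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Config struct {
				Nodes []struct {
					Name         string `json:"Name"`
					ControlPlane bool   `json:"ControlPlane"`
				} `json:"Nodes"`
			} `json:"Config"`
		} `json:"valid"`
	}

	func main() {
		// Trimmed-down version of the JSON in the failure message.
		raw := []byte(`{"invalid":[],"valid":[{"Name":"multinode-352000","Config":{"Nodes":[{"Name":"","ControlPlane":true}]}}]}`)
		var pl profileList
		if err := json.Unmarshal(raw, &pl); err != nil {
			panic(err)
		}
		// The test expects 3 nodes after AddNode; the stopped profile has 1.
		fmt.Println(len(pl.Valid[0].Config.Nodes)) // prints 1
	}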

TestMultiNode/serial/CopyFile (0.06s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-352000 status --output json --alsologtostderr
multinode_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-352000 status --output json --alsologtostderr: exit status 7 (27.64925ms)

-- stdout --
	{"Name":"multinode-352000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0925 04:19:52.462856    4399 out.go:296] Setting OutFile to fd 1 ...
	I0925 04:19:52.463019    4399 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:19:52.463022    4399 out.go:309] Setting ErrFile to fd 2...
	I0925 04:19:52.463024    4399 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:19:52.463157    4399 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17297-1010/.minikube/bin
	I0925 04:19:52.463294    4399 out.go:303] Setting JSON to true
	I0925 04:19:52.463309    4399 mustload.go:65] Loading cluster: multinode-352000
	I0925 04:19:52.463375    4399 notify.go:220] Checking for updates...
	I0925 04:19:52.463521    4399 config.go:182] Loaded profile config "multinode-352000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 04:19:52.463526    4399 status.go:255] checking status of multinode-352000 ...
	I0925 04:19:52.463728    4399 status.go:330] multinode-352000 host status = "Stopped" (err=<nil>)
	I0925 04:19:52.463732    4399 status.go:343] host is not running, skipping remaining checks
	I0925 04:19:52.463734    4399 status.go:257] multinode-352000 status: &{Name:multinode-352000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:180: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-352000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-352000 -n multinode-352000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-352000 -n multinode-352000: exit status 7 (26.96275ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-352000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
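
The decode failure above is a plain type mismatch: with a single stopped node, `status --output json` emits one JSON object, while the test unmarshals into a slice (`[]cmd.Status`). A minimal standalone sketch of the same mismatch (hypothetical `status` type, not minikube's actual `cmd.Status`):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	type status struct {
		Name string
		Host string
	}

	func main() {
		// A one-node `status --output json` prints an object, not an array.
		raw := []byte(`{"Name":"multinode-352000","Host":"Stopped"}`)

		var many []status
		fmt.Println(json.Unmarshal(raw, &many))
		// json: cannot unmarshal object into Go value of type []main.status

		var one status
		fmt.Println(json.Unmarshal(raw, &one), one.Host) // <nil> Stopped
	}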

TestMultiNode/serial/StopNode (0.13s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-352000 node stop m03
multinode_test.go:210: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-352000 node stop m03: exit status 85 (44.489292ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:212: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-352000 node stop m03": exit status 85
multinode_test.go:216: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-352000 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-352000 status: exit status 7 (27.644666ms)

-- stdout --
	multinode-352000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-352000 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-352000 status --alsologtostderr: exit status 7 (27.67025ms)

-- stdout --
	multinode-352000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0925 04:19:52.590517    4407 out.go:296] Setting OutFile to fd 1 ...
	I0925 04:19:52.590683    4407 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:19:52.590685    4407 out.go:309] Setting ErrFile to fd 2...
	I0925 04:19:52.590688    4407 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:19:52.590828    4407 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17297-1010/.minikube/bin
	I0925 04:19:52.590965    4407 out.go:303] Setting JSON to false
	I0925 04:19:52.590979    4407 mustload.go:65] Loading cluster: multinode-352000
	I0925 04:19:52.591032    4407 notify.go:220] Checking for updates...
	I0925 04:19:52.591181    4407 config.go:182] Loaded profile config "multinode-352000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 04:19:52.591186    4407 status.go:255] checking status of multinode-352000 ...
	I0925 04:19:52.591393    4407 status.go:330] multinode-352000 host status = "Stopped" (err=<nil>)
	I0925 04:19:52.591396    4407 status.go:343] host is not running, skipping remaining checks
	I0925 04:19:52.591398    4407 status.go:257] multinode-352000 status: &{Name:multinode-352000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:229: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-352000 status --alsologtostderr": multinode-352000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-352000 -n multinode-352000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-352000 -n multinode-352000: exit status 7 (27.480667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-352000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.13s)

TestMultiNode/serial/StartAfterStop (0.1s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-352000 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-352000 node start m03 --alsologtostderr: exit status 85 (43.566459ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0925 04:19:52.645711    4411 out.go:296] Setting OutFile to fd 1 ...
	I0925 04:19:52.645936    4411 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:19:52.645938    4411 out.go:309] Setting ErrFile to fd 2...
	I0925 04:19:52.645941    4411 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:19:52.646075    4411 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17297-1010/.minikube/bin
	I0925 04:19:52.646318    4411 mustload.go:65] Loading cluster: multinode-352000
	I0925 04:19:52.646509    4411 config.go:182] Loaded profile config "multinode-352000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 04:19:52.650681    4411 out.go:177] 
	W0925 04:19:52.654814    4411 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0925 04:19:52.654824    4411 out.go:239] * 
	* 
	W0925 04:19:52.656517    4411 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0925 04:19:52.659729    4411 out.go:177] 

** /stderr **
multinode_test.go:256: I0925 04:19:52.645711    4411 out.go:296] Setting OutFile to fd 1 ...
I0925 04:19:52.645936    4411 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0925 04:19:52.645938    4411 out.go:309] Setting ErrFile to fd 2...
I0925 04:19:52.645941    4411 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0925 04:19:52.646075    4411 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17297-1010/.minikube/bin
I0925 04:19:52.646318    4411 mustload.go:65] Loading cluster: multinode-352000
I0925 04:19:52.646509    4411 config.go:182] Loaded profile config "multinode-352000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0925 04:19:52.650681    4411 out.go:177] 
W0925 04:19:52.654814    4411 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0925 04:19:52.654824    4411 out.go:239] * 
* 
W0925 04:19:52.656517    4411 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0925 04:19:52.659729    4411 out.go:177] 
multinode_test.go:257: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-352000 node start m03 --alsologtostderr": exit status 85
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-352000 status
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-352000 status: exit status 7 (26.903459ms)

-- stdout --
	multinode-352000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:263: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-352000 status" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-352000 -n multinode-352000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-352000 -n multinode-352000: exit status 7 (27.956792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-352000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (0.10s)

TestMultiNode/serial/RestartKeepsNodes (5.36s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-352000
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-352000
multinode_test.go:295: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-352000 --wait=true -v=8 --alsologtostderr
multinode_test.go:295: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-352000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.174153959s)

-- stdout --
	* [multinode-352000] minikube v1.31.2 on Darwin 13.6 (arm64)
	  - MINIKUBE_LOCATION=17297
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node multinode-352000 in cluster multinode-352000
	* Restarting existing qemu2 VM for "multinode-352000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-352000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0925 04:19:52.829602    4421 out.go:296] Setting OutFile to fd 1 ...
	I0925 04:19:52.829717    4421 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:19:52.829720    4421 out.go:309] Setting ErrFile to fd 2...
	I0925 04:19:52.829723    4421 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:19:52.829856    4421 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17297-1010/.minikube/bin
	I0925 04:19:52.830862    4421 out.go:303] Setting JSON to false
	I0925 04:19:52.845971    4421 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2967,"bootTime":1695637825,"procs":411,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.6","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 04:19:52.846054    4421 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0925 04:19:52.849793    4421 out.go:177] * [multinode-352000] minikube v1.31.2 on Darwin 13.6 (arm64)
	I0925 04:19:52.856878    4421 out.go:177]   - MINIKUBE_LOCATION=17297
	I0925 04:19:52.856969    4421 notify.go:220] Checking for updates...
	I0925 04:19:52.860767    4421 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	I0925 04:19:52.863850    4421 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 04:19:52.866826    4421 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 04:19:52.869729    4421 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	I0925 04:19:52.872768    4421 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 04:19:52.876047    4421 config.go:182] Loaded profile config "multinode-352000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 04:19:52.876095    4421 driver.go:373] Setting default libvirt URI to qemu:///system
	I0925 04:19:52.880745    4421 out.go:177] * Using the qemu2 driver based on existing profile
	I0925 04:19:52.887751    4421 start.go:298] selected driver: qemu2
	I0925 04:19:52.887757    4421 start.go:902] validating driver "qemu2" against &{Name:multinode-352000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-352000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 04:19:52.887798    4421 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 04:19:52.889765    4421 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0925 04:19:52.889797    4421 cni.go:84] Creating CNI manager for ""
	I0925 04:19:52.889801    4421 cni.go:136] 1 nodes found, recommending kindnet
	I0925 04:19:52.889804    4421 start_flags.go:321] config:
	{Name:multinode-352000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-352000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 04:19:52.893862    4421 iso.go:125] acquiring lock: {Name:mkf881a60cf9fd1672567914305ff6f7a4f13809 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 04:19:52.900720    4421 out.go:177] * Starting control plane node multinode-352000 in cluster multinode-352000
	I0925 04:19:52.904609    4421 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0925 04:19:52.904627    4421 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0925 04:19:52.904644    4421 cache.go:57] Caching tarball of preloaded images
	I0925 04:19:52.904706    4421 preload.go:174] Found /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0925 04:19:52.904711    4421 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0925 04:19:52.904776    4421 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/multinode-352000/config.json ...
	I0925 04:19:52.905139    4421 start.go:365] acquiring machines lock for multinode-352000: {Name:mk918d99818e2cf21e5912bc291ed18d4b442ba6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 04:19:52.905173    4421 start.go:369] acquired machines lock for "multinode-352000" in 28.333µs
	I0925 04:19:52.905187    4421 start.go:96] Skipping create...Using existing machine configuration
	I0925 04:19:52.905193    4421 fix.go:54] fixHost starting: 
	I0925 04:19:52.905312    4421 fix.go:102] recreateIfNeeded on multinode-352000: state=Stopped err=<nil>
	W0925 04:19:52.905320    4421 fix.go:128] unexpected machine state, will restart: <nil>
	I0925 04:19:52.913766    4421 out.go:177] * Restarting existing qemu2 VM for "multinode-352000" ...
	I0925 04:19:52.917762    4421 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/multinode-352000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/multinode-352000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/multinode-352000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:1a:a1:c9:28:58 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/multinode-352000/disk.qcow2
	I0925 04:19:52.919681    4421 main.go:141] libmachine: STDOUT: 
	I0925 04:19:52.919701    4421 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 04:19:52.919735    4421 fix.go:56] fixHost completed within 14.542167ms
	I0925 04:19:52.919741    4421 start.go:83] releasing machines lock for "multinode-352000", held for 14.563375ms
	W0925 04:19:52.919746    4421 start.go:688] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 04:19:52.919783    4421 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:19:52.919787    4421 start.go:703] Will try again in 5 seconds ...
	I0925 04:19:57.921989    4421 start.go:365] acquiring machines lock for multinode-352000: {Name:mk918d99818e2cf21e5912bc291ed18d4b442ba6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 04:19:57.922352    4421 start.go:369] acquired machines lock for "multinode-352000" in 278.375µs
	I0925 04:19:57.922494    4421 start.go:96] Skipping create...Using existing machine configuration
	I0925 04:19:57.922512    4421 fix.go:54] fixHost starting: 
	I0925 04:19:57.923181    4421 fix.go:102] recreateIfNeeded on multinode-352000: state=Stopped err=<nil>
	W0925 04:19:57.923212    4421 fix.go:128] unexpected machine state, will restart: <nil>
	I0925 04:19:57.931505    4421 out.go:177] * Restarting existing qemu2 VM for "multinode-352000" ...
	I0925 04:19:57.935731    4421 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/multinode-352000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/multinode-352000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/multinode-352000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:1a:a1:c9:28:58 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/multinode-352000/disk.qcow2
	I0925 04:19:57.943839    4421 main.go:141] libmachine: STDOUT: 
	I0925 04:19:57.943907    4421 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 04:19:57.943985    4421 fix.go:56] fixHost completed within 21.47275ms
	I0925 04:19:57.944004    4421 start.go:83] releasing machines lock for "multinode-352000", held for 21.630375ms
	W0925 04:19:57.944202    4421 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-352000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-352000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:19:57.951567    4421 out.go:177] 
	W0925 04:19:57.955674    4421 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 04:19:57.955720    4421 out.go:239] * 
	* 
	W0925 04:19:57.958155    4421 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0925 04:19:57.965378    4421 out.go:177] 

** /stderr **
multinode_test.go:297: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-352000" : exit status 80
multinode_test.go:300: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-352000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-352000 -n multinode-352000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-352000 -n multinode-352000: exit status 7 (30.771792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-352000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (5.36s)
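Every failure in this group bottoms out at the same step: socket_vmnet_client exits with "Connection refused" because nothing is listening on /var/run/socket_vmnet. A minimal Go sketch of just that connection step follows; it is not part of the test suite, and the only thing it assumes is the socket path copied from the log lines above.

	// probe_socket_vmnet.go - minimal sketch, assuming only the socket
	// path shown in the logs above. Dialing the unix socket reproduces
	// the "Connection refused" seen whenever no socket_vmnet daemon is
	// listening.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path copied from the log
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "Failed to connect to %q: %v\n", sock, err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Printf("%s is accepting connections\n", sock)
	}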

TestMultiNode/serial/DeleteNode (0.09s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-352000 node delete m03
multinode_test.go:394: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-352000 node delete m03: exit status 89 (37.439833ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-352000"

-- /stdout --
multinode_test.go:396: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-352000 node delete m03": exit status 89
multinode_test.go:400: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-352000 status --alsologtostderr
multinode_test.go:400: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-352000 status --alsologtostderr: exit status 7 (27.72675ms)

-- stdout --
	multinode-352000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0925 04:19:58.138786    4435 out.go:296] Setting OutFile to fd 1 ...
	I0925 04:19:58.138941    4435 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:19:58.138944    4435 out.go:309] Setting ErrFile to fd 2...
	I0925 04:19:58.138946    4435 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:19:58.139078    4435 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17297-1010/.minikube/bin
	I0925 04:19:58.139225    4435 out.go:303] Setting JSON to false
	I0925 04:19:58.139236    4435 mustload.go:65] Loading cluster: multinode-352000
	I0925 04:19:58.139288    4435 notify.go:220] Checking for updates...
	I0925 04:19:58.139451    4435 config.go:182] Loaded profile config "multinode-352000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 04:19:58.139455    4435 status.go:255] checking status of multinode-352000 ...
	I0925 04:19:58.139668    4435 status.go:330] multinode-352000 host status = "Stopped" (err=<nil>)
	I0925 04:19:58.139671    4435 status.go:343] host is not running, skipping remaining checks
	I0925 04:19:58.139674    4435 status.go:257] multinode-352000 status: &{Name:multinode-352000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:402: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-352000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-352000 -n multinode-352000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-352000 -n multinode-352000: exit status 7 (27.523875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-352000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.09s)
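For readers scanning the status.go:257 lines in these logs: that line prints one entry per node, and every field of it appears verbatim in the stderr block above. A hedged Go mirror of those fields follows; the field names are copied from the log output, while the type and package names are invented for illustration.

	// status_sketch.go - hypothetical mirror of the struct printed at
	// status.go:257 above; field names come from that log line, the
	// type and package names are invented for illustration.
	package status

	type ClusterStatus struct {
		Name       string // profile name, e.g. "multinode-352000"
		Host       string // "Stopped" throughout these failures
		Kubelet    string
		APIServer  string
		Kubeconfig string
		Worker     bool // false for the control-plane entry above
		TimeToStop string
		DockerEnv  string
		PodManEnv  string
	}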

TestMultiNode/serial/StopMultiNode (0.14s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-352000 stop
multinode_test.go:320: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-352000 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-352000 status: exit status 7 (28.115792ms)

-- stdout --
	multinode-352000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-352000 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-352000 status --alsologtostderr: exit status 7 (28.008709ms)

-- stdout --
	multinode-352000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0925 04:19:58.282560    4443 out.go:296] Setting OutFile to fd 1 ...
	I0925 04:19:58.282706    4443 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:19:58.282709    4443 out.go:309] Setting ErrFile to fd 2...
	I0925 04:19:58.282711    4443 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:19:58.282826    4443 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17297-1010/.minikube/bin
	I0925 04:19:58.282952    4443 out.go:303] Setting JSON to false
	I0925 04:19:58.282966    4443 mustload.go:65] Loading cluster: multinode-352000
	I0925 04:19:58.283014    4443 notify.go:220] Checking for updates...
	I0925 04:19:58.283162    4443 config.go:182] Loaded profile config "multinode-352000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 04:19:58.283167    4443 status.go:255] checking status of multinode-352000 ...
	I0925 04:19:58.283368    4443 status.go:330] multinode-352000 host status = "Stopped" (err=<nil>)
	I0925 04:19:58.283371    4443 status.go:343] host is not running, skipping remaining checks
	I0925 04:19:58.283373    4443 status.go:257] multinode-352000 status: &{Name:multinode-352000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:333: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-352000 status --alsologtostderr": multinode-352000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:337: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-352000 status --alsologtostderr": multinode-352000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-352000 -n multinode-352000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-352000 -n multinode-352000: exit status 7 (27.081083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-352000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (0.14s)
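Both assertions above ("incorrect number of stopped hosts/kubelets") fire because the status output lists only the control-plane node: the worker nodes were never created, so fewer "Stopped" markers appear than the test expects for a multi-node cluster. A sketch of that style of check follows; the real logic at multinode_test.go:333/337 is not quoted in this report, so the helper below is illustrative only.

	// count_sketch.go - illustrative only; the real assertion lives in
	// multinode_test.go and is not quoted in this report.
	package main

	import (
		"fmt"
		"strings"
	)

	// countStopped compares the number of "host: Stopped" markers in
	// the status output against the expected node count.
	func countStopped(statusOutput string, wantNodes int) error {
		if got := strings.Count(statusOutput, "host: Stopped"); got != wantNodes {
			return fmt.Errorf("incorrect number of stopped hosts: got %d, want %d", got, wantNodes)
		}
		return nil
	}

	func main() {
		out := "multinode-352000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\n"
		// Two nodes were intended for this cluster, but only the
		// control plane ever existed, so the check fails as in the log.
		fmt.Println(countStopped(out, 2))
	}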

TestMultiNode/serial/RestartMultiNode (5.25s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-352000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:354: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-352000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.175873958s)

-- stdout --
	* [multinode-352000] minikube v1.31.2 on Darwin 13.6 (arm64)
	  - MINIKUBE_LOCATION=17297
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node multinode-352000 in cluster multinode-352000
	* Restarting existing qemu2 VM for "multinode-352000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-352000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0925 04:19:58.336719    4447 out.go:296] Setting OutFile to fd 1 ...
	I0925 04:19:58.336845    4447 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:19:58.336848    4447 out.go:309] Setting ErrFile to fd 2...
	I0925 04:19:58.336851    4447 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:19:58.336977    4447 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17297-1010/.minikube/bin
	I0925 04:19:58.337919    4447 out.go:303] Setting JSON to false
	I0925 04:19:58.353035    4447 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2973,"bootTime":1695637825,"procs":411,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.6","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 04:19:58.353113    4447 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0925 04:19:58.357988    4447 out.go:177] * [multinode-352000] minikube v1.31.2 on Darwin 13.6 (arm64)
	I0925 04:19:58.363959    4447 out.go:177]   - MINIKUBE_LOCATION=17297
	I0925 04:19:58.364032    4447 notify.go:220] Checking for updates...
	I0925 04:19:58.367999    4447 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	I0925 04:19:58.371104    4447 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 04:19:58.373984    4447 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 04:19:58.376954    4447 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	I0925 04:19:58.380029    4447 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 04:19:58.383202    4447 config.go:182] Loaded profile config "multinode-352000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 04:19:58.383446    4447 driver.go:373] Setting default libvirt URI to qemu:///system
	I0925 04:19:58.388032    4447 out.go:177] * Using the qemu2 driver based on existing profile
	I0925 04:19:58.395009    4447 start.go:298] selected driver: qemu2
	I0925 04:19:58.395017    4447 start.go:902] validating driver "qemu2" against &{Name:multinode-352000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-352000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 04:19:58.395076    4447 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 04:19:58.397035    4447 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0925 04:19:58.397056    4447 cni.go:84] Creating CNI manager for ""
	I0925 04:19:58.397061    4447 cni.go:136] 1 nodes found, recommending kindnet
	I0925 04:19:58.397069    4447 start_flags.go:321] config:
	{Name:multinode-352000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-352000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 04:19:58.400920    4447 iso.go:125] acquiring lock: {Name:mkf881a60cf9fd1672567914305ff6f7a4f13809 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 04:19:58.408020    4447 out.go:177] * Starting control plane node multinode-352000 in cluster multinode-352000
	I0925 04:19:58.411985    4447 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0925 04:19:58.412012    4447 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0925 04:19:58.412023    4447 cache.go:57] Caching tarball of preloaded images
	I0925 04:19:58.412084    4447 preload.go:174] Found /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0925 04:19:58.412089    4447 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0925 04:19:58.412142    4447 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/multinode-352000/config.json ...
	I0925 04:19:58.412496    4447 start.go:365] acquiring machines lock for multinode-352000: {Name:mk918d99818e2cf21e5912bc291ed18d4b442ba6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 04:19:58.412526    4447 start.go:369] acquired machines lock for "multinode-352000" in 24.417µs
	I0925 04:19:58.412535    4447 start.go:96] Skipping create...Using existing machine configuration
	I0925 04:19:58.412540    4447 fix.go:54] fixHost starting: 
	I0925 04:19:58.412650    4447 fix.go:102] recreateIfNeeded on multinode-352000: state=Stopped err=<nil>
	W0925 04:19:58.412658    4447 fix.go:128] unexpected machine state, will restart: <nil>
	I0925 04:19:58.421130    4447 out.go:177] * Restarting existing qemu2 VM for "multinode-352000" ...
	I0925 04:19:58.425007    4447 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/multinode-352000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/multinode-352000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/multinode-352000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:1a:a1:c9:28:58 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/multinode-352000/disk.qcow2
	I0925 04:19:58.426691    4447 main.go:141] libmachine: STDOUT: 
	I0925 04:19:58.426705    4447 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 04:19:58.426731    4447 fix.go:56] fixHost completed within 14.189667ms
	I0925 04:19:58.426735    4447 start.go:83] releasing machines lock for "multinode-352000", held for 14.205291ms
	W0925 04:19:58.426742    4447 start.go:688] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 04:19:58.426797    4447 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:19:58.426808    4447 start.go:703] Will try again in 5 seconds ...
	I0925 04:20:03.429047    4447 start.go:365] acquiring machines lock for multinode-352000: {Name:mk918d99818e2cf21e5912bc291ed18d4b442ba6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 04:20:03.429447    4447 start.go:369] acquired machines lock for "multinode-352000" in 327.042µs
	I0925 04:20:03.429597    4447 start.go:96] Skipping create...Using existing machine configuration
	I0925 04:20:03.429618    4447 fix.go:54] fixHost starting: 
	I0925 04:20:03.430351    4447 fix.go:102] recreateIfNeeded on multinode-352000: state=Stopped err=<nil>
	W0925 04:20:03.430378    4447 fix.go:128] unexpected machine state, will restart: <nil>
	I0925 04:20:03.434765    4447 out.go:177] * Restarting existing qemu2 VM for "multinode-352000" ...
	I0925 04:20:03.443020    4447 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/multinode-352000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/multinode-352000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/multinode-352000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:1a:a1:c9:28:58 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/multinode-352000/disk.qcow2
	I0925 04:20:03.452021    4447 main.go:141] libmachine: STDOUT: 
	I0925 04:20:03.452070    4447 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 04:20:03.452190    4447 fix.go:56] fixHost completed within 22.569125ms
	I0925 04:20:03.452209    4447 start.go:83] releasing machines lock for "multinode-352000", held for 22.740208ms
	W0925 04:20:03.452366    4447 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-352000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-352000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:20:03.459701    4447 out.go:177] 
	W0925 04:20:03.463831    4447 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 04:20:03.463878    4447 out.go:239] * 
	* 
	W0925 04:20:03.466572    4447 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0925 04:20:03.474735    4447 out.go:177] 

** /stderr **
multinode_test.go:356: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-352000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-352000 -n multinode-352000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-352000 -n multinode-352000: exit status 7 (68.422208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-352000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.25s)
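The stderr block above shows the restart path's retry shape: one failed driver start ("StartHost failed, but will try again"), a fixed five-second wait ("Will try again in 5 seconds ..."), one more attempt, then exit with GUEST_PROVISION. A minimal sketch of that pattern follows; startHostOnce is a hypothetical stand-in for the real driver call and simply returns the error seen in the log.

	// retry_sketch.go - sketches the single-retry shape visible in the
	// log; startHostOnce is hypothetical and always fails the same way.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	func startHostOnce() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func retryStart() error {
		if err := startHostOnce(); err != nil {
			fmt.Printf("! StartHost failed, but will try again: %v\n", err)
			time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
			return startHostOnce()
		}
		return nil
	}

	func main() {
		if err := retryStart(); err != nil {
			fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
		}
	}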

TestMultiNode/serial/ValidateNameConflict (19.69s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-352000
multinode_test.go:452: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-352000-m01 --driver=qemu2 
E0925 04:20:11.254092    1469 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/ingress-addon-legacy-907000/client.crt: no such file or directory
E0925 04:20:11.260168    1469 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/ingress-addon-legacy-907000/client.crt: no such file or directory
E0925 04:20:11.272259    1469 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/ingress-addon-legacy-907000/client.crt: no such file or directory
E0925 04:20:11.294355    1469 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/ingress-addon-legacy-907000/client.crt: no such file or directory
E0925 04:20:11.336431    1469 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/ingress-addon-legacy-907000/client.crt: no such file or directory
E0925 04:20:11.418504    1469 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/ingress-addon-legacy-907000/client.crt: no such file or directory
E0925 04:20:11.580666    1469 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/ingress-addon-legacy-907000/client.crt: no such file or directory
E0925 04:20:11.901826    1469 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/ingress-addon-legacy-907000/client.crt: no such file or directory
E0925 04:20:12.544217    1469 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/ingress-addon-legacy-907000/client.crt: no such file or directory
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-352000-m01 --driver=qemu2 : exit status 80 (9.7098165s)

-- stdout --
	* [multinode-352000-m01] minikube v1.31.2 on Darwin 13.6 (arm64)
	  - MINIKUBE_LOCATION=17297
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-352000-m01 in cluster multinode-352000-m01
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-352000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-352000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-352000-m02 --driver=qemu2 
E0925 04:20:13.826657    1469 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/ingress-addon-legacy-907000/client.crt: no such file or directory
E0925 04:20:16.389189    1469 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/ingress-addon-legacy-907000/client.crt: no such file or directory
E0925 04:20:21.511668    1469 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/ingress-addon-legacy-907000/client.crt: no such file or directory
multinode_test.go:460: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-352000-m02 --driver=qemu2 : exit status 80 (9.743533584s)

-- stdout --
	* [multinode-352000-m02] minikube v1.31.2 on Darwin 13.6 (arm64)
	  - MINIKUBE_LOCATION=17297
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-352000-m02 in cluster multinode-352000-m02
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-352000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-352000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:462: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-352000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-352000
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-352000: exit status 89 (77.998792ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-352000"

-- /stdout --
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-352000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-352000 -n multinode-352000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-352000 -n multinode-352000: exit status 7 (28.547833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-352000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (19.69s)

TestPreload (9.91s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-652000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
E0925 04:20:26.839830    1469 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/client.crt: no such file or directory
E0925 04:20:31.754209    1469 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/ingress-addon-legacy-907000/client.crt: no such file or directory
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-652000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.750137916s)

-- stdout --
	* [test-preload-652000] minikube v1.31.2 on Darwin 13.6 (arm64)
	  - MINIKUBE_LOCATION=17297
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node test-preload-652000 in cluster test-preload-652000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-652000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0925 04:20:23.393508    4505 out.go:296] Setting OutFile to fd 1 ...
	I0925 04:20:23.393619    4505 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:20:23.393622    4505 out.go:309] Setting ErrFile to fd 2...
	I0925 04:20:23.393624    4505 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:20:23.393749    4505 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17297-1010/.minikube/bin
	I0925 04:20:23.394792    4505 out.go:303] Setting JSON to false
	I0925 04:20:23.410203    4505 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2998,"bootTime":1695637825,"procs":411,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.6","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 04:20:23.410289    4505 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0925 04:20:23.414926    4505 out.go:177] * [test-preload-652000] minikube v1.31.2 on Darwin 13.6 (arm64)
	I0925 04:20:23.422802    4505 out.go:177]   - MINIKUBE_LOCATION=17297
	I0925 04:20:23.426802    4505 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	I0925 04:20:23.422889    4505 notify.go:220] Checking for updates...
	I0925 04:20:23.430743    4505 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 04:20:23.433806    4505 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 04:20:23.436794    4505 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	I0925 04:20:23.440267    4505 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 04:20:23.444137    4505 config.go:182] Loaded profile config "multinode-352000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 04:20:23.444181    4505 driver.go:373] Setting default libvirt URI to qemu:///system
	I0925 04:20:23.448786    4505 out.go:177] * Using the qemu2 driver based on user configuration
	I0925 04:20:23.455816    4505 start.go:298] selected driver: qemu2
	I0925 04:20:23.455825    4505 start.go:902] validating driver "qemu2" against <nil>
	I0925 04:20:23.455831    4505 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 04:20:23.457862    4505 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0925 04:20:23.460779    4505 out.go:177] * Automatically selected the socket_vmnet network
	I0925 04:20:23.463859    4505 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0925 04:20:23.463888    4505 cni.go:84] Creating CNI manager for ""
	I0925 04:20:23.463895    4505 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 04:20:23.463899    4505 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0925 04:20:23.463905    4505 start_flags.go:321] config:
	{Name:test-preload-652000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-652000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 04:20:23.468176    4505 iso.go:125] acquiring lock: {Name:mkf881a60cf9fd1672567914305ff6f7a4f13809 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 04:20:23.475644    4505 out.go:177] * Starting control plane node test-preload-652000 in cluster test-preload-652000
	I0925 04:20:23.479739    4505 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0925 04:20:23.479817    4505 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/test-preload-652000/config.json ...
	I0925 04:20:23.479838    4505 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/test-preload-652000/config.json: {Name:mk1ed0686ea8e9b7344500a249ac6cc220020f75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 04:20:23.479838    4505 cache.go:107] acquiring lock: {Name:mkabf7fabdeaff7e666ac8f9deef5b56be85207e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 04:20:23.479845    4505 cache.go:107] acquiring lock: {Name:mk9821ee6b2f6d7e4429c094f78b0c6b12b1b2c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 04:20:23.479868    4505 cache.go:107] acquiring lock: {Name:mked62974e1f064b7375a36374094036d20b6229 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 04:20:23.479838    4505 cache.go:107] acquiring lock: {Name:mk22409111e13c92863c1a6a2027dfffd517746f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 04:20:23.480053    4505 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0925 04:20:23.480082    4505 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0925 04:20:23.480100    4505 start.go:365] acquiring machines lock for test-preload-652000: {Name:mk918d99818e2cf21e5912bc291ed18d4b442ba6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 04:20:23.480137    4505 start.go:369] acquired machines lock for "test-preload-652000" in 29.625µs
	I0925 04:20:23.480136    4505 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0925 04:20:23.480150    4505 cache.go:107] acquiring lock: {Name:mk75c8ed027d2497f48e57e4b645b406e36abf09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 04:20:23.480149    4505 start.go:93] Provisioning new machine with config: &{Name:test-preload-652000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-652000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 04:20:23.480173    4505 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 04:20:23.483793    4505 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0925 04:20:23.480139    4505 cache.go:107] acquiring lock: {Name:mk823378ac59f4ad3b07b8933bd7368caea6632f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 04:20:23.480203    4505 cache.go:107] acquiring lock: {Name:mk9103af46e6e0aae255d1e821d79a82daf7aef5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 04:20:23.480302    4505 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0925 04:20:23.480304    4505 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0925 04:20:23.480322    4505 cache.go:107] acquiring lock: {Name:mk9c2039c8adc9ed0e80926b2201dde69395df22 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 04:20:23.484453    4505 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0925 04:20:23.484481    4505 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0925 04:20:23.484537    4505 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0925 04:20:23.497439    4505 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0925 04:20:23.499301    4505 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0925 04:20:23.499367    4505 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0925 04:20:23.499408    4505 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0925 04:20:23.499444    4505 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0925 04:20:23.499467    4505 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0925 04:20:23.499489    4505 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0925 04:20:23.499859    4505 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0925 04:20:23.500513    4505 start.go:159] libmachine.API.Create for "test-preload-652000" (driver="qemu2")
	I0925 04:20:23.500533    4505 client.go:168] LocalClient.Create starting
	I0925 04:20:23.500589    4505 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem
	I0925 04:20:23.500613    4505 main.go:141] libmachine: Decoding PEM data...
	I0925 04:20:23.500624    4505 main.go:141] libmachine: Parsing certificate...
	I0925 04:20:23.500662    4505 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/cert.pem
	I0925 04:20:23.500680    4505 main.go:141] libmachine: Decoding PEM data...
	I0925 04:20:23.500687    4505 main.go:141] libmachine: Parsing certificate...
	I0925 04:20:23.500988    4505 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17297-1010/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0925 04:20:23.685520    4505 main.go:141] libmachine: Creating SSH key...
	I0925 04:20:23.764981    4505 main.go:141] libmachine: Creating Disk image...
	I0925 04:20:23.764999    4505 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 04:20:23.765223    4505 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/test-preload-652000/disk.qcow2.raw /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/test-preload-652000/disk.qcow2
	I0925 04:20:23.773742    4505 main.go:141] libmachine: STDOUT: 
	I0925 04:20:23.773759    4505 main.go:141] libmachine: STDERR: 
	I0925 04:20:23.773813    4505 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/test-preload-652000/disk.qcow2 +20000M
	I0925 04:20:23.781259    4505 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 04:20:23.781281    4505 main.go:141] libmachine: STDERR: 
	I0925 04:20:23.781302    4505 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/test-preload-652000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/test-preload-652000/disk.qcow2
	I0925 04:20:23.781310    4505 main.go:141] libmachine: Starting QEMU VM...
	I0925 04:20:23.781355    4505 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/test-preload-652000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/test-preload-652000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/test-preload-652000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:8c:41:49:61:37 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/test-preload-652000/disk.qcow2
	I0925 04:20:23.782950    4505 main.go:141] libmachine: STDOUT: 
	I0925 04:20:23.782963    4505 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 04:20:23.782986    4505 client.go:171] LocalClient.Create took 282.446958ms
	I0925 04:20:24.181017    4505 cache.go:162] opening:  /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0925 04:20:24.268076    4505 cache.go:162] opening:  /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	W0925 04:20:24.442720    4505 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0925 04:20:24.442756    4505 cache.go:162] opening:  /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0925 04:20:24.650890    4505 cache.go:162] opening:  /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0925 04:20:24.787315    4505 cache.go:157] /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0925 04:20:24.787335    4505 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 1.307465667s
	I0925 04:20:24.787346    4505 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	I0925 04:20:24.866898    4505 cache.go:162] opening:  /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0925 04:20:25.087675    4505 cache.go:162] opening:  /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0925 04:20:25.295646    4505 cache.go:162] opening:  /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0925 04:20:25.783263    4505 start.go:128] duration metric: createHost completed in 2.303064042s
	I0925 04:20:25.783312    4505 start.go:83] releasing machines lock for "test-preload-652000", held for 2.303165125s
	W0925 04:20:25.783369    4505 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:20:25.795503    4505 out.go:177] * Deleting "test-preload-652000" in qemu2 ...
	W0925 04:20:25.816589    4505 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:20:25.816626    4505 start.go:703] Will try again in 5 seconds ...
	W0925 04:20:25.839553    4505 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0925 04:20:25.839641    4505 cache.go:162] opening:  /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0925 04:20:26.045588    4505 cache.go:157] /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0925 04:20:26.045645    4505 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 2.565800208s
	I0925 04:20:26.045673    4505 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0925 04:20:26.861794    4505 cache.go:157] /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0925 04:20:26.861845    4505 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 3.381686125s
	I0925 04:20:26.861882    4505 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0925 04:20:27.811040    4505 cache.go:157] /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0925 04:20:27.811093    4505 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 4.330975791s
	I0925 04:20:27.811123    4505 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0925 04:20:28.359528    4505 cache.go:157] /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0925 04:20:28.359576    4505 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 4.87973075s
	I0925 04:20:28.359616    4505 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0925 04:20:29.135193    4505 cache.go:157] /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0925 04:20:29.135241    4505 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 5.65496425s
	I0925 04:20:29.135302    4505 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0925 04:20:29.428413    4505 cache.go:157] /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0925 04:20:29.428475    4505 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 5.948632333s
	I0925 04:20:29.428519    4505 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0925 04:20:30.816838    4505 start.go:365] acquiring machines lock for test-preload-652000: {Name:mk918d99818e2cf21e5912bc291ed18d4b442ba6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 04:20:30.817297    4505 start.go:369] acquired machines lock for "test-preload-652000" in 392.042µs
	I0925 04:20:30.817403    4505 start.go:93] Provisioning new machine with config: &{Name:test-preload-652000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-652000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 04:20:30.817665    4505 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 04:20:30.826309    4505 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0925 04:20:30.871252    4505 start.go:159] libmachine.API.Create for "test-preload-652000" (driver="qemu2")
	I0925 04:20:30.871303    4505 client.go:168] LocalClient.Create starting
	I0925 04:20:30.871416    4505 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem
	I0925 04:20:30.871471    4505 main.go:141] libmachine: Decoding PEM data...
	I0925 04:20:30.871494    4505 main.go:141] libmachine: Parsing certificate...
	I0925 04:20:30.871566    4505 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/cert.pem
	I0925 04:20:30.871600    4505 main.go:141] libmachine: Decoding PEM data...
	I0925 04:20:30.871615    4505 main.go:141] libmachine: Parsing certificate...
	I0925 04:20:30.872122    4505 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17297-1010/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0925 04:20:31.002406    4505 main.go:141] libmachine: Creating SSH key...
	I0925 04:20:31.062498    4505 main.go:141] libmachine: Creating Disk image...
	I0925 04:20:31.062503    4505 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 04:20:31.062632    4505 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/test-preload-652000/disk.qcow2.raw /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/test-preload-652000/disk.qcow2
	I0925 04:20:31.071602    4505 main.go:141] libmachine: STDOUT: 
	I0925 04:20:31.071614    4505 main.go:141] libmachine: STDERR: 
	I0925 04:20:31.071673    4505 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/test-preload-652000/disk.qcow2 +20000M
	I0925 04:20:31.079048    4505 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 04:20:31.079062    4505 main.go:141] libmachine: STDERR: 
	I0925 04:20:31.079076    4505 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/test-preload-652000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/test-preload-652000/disk.qcow2
	I0925 04:20:31.079088    4505 main.go:141] libmachine: Starting QEMU VM...
	I0925 04:20:31.079131    4505 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/test-preload-652000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/test-preload-652000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/test-preload-652000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:15:8a:4b:1a:99 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/test-preload-652000/disk.qcow2
	I0925 04:20:31.080786    4505 main.go:141] libmachine: STDOUT: 
	I0925 04:20:31.080798    4505 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 04:20:31.080815    4505 client.go:171] LocalClient.Create took 209.507167ms
	I0925 04:20:33.081297    4505 start.go:128] duration metric: createHost completed in 2.263569666s
	I0925 04:20:33.081365    4505 start.go:83] releasing machines lock for "test-preload-652000", held for 2.264043542s
	W0925 04:20:33.081615    4505 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-652000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-652000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:20:33.089144    4505 out.go:177] 
	W0925 04:20:33.093103    4505 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 04:20:33.093146    4505 out.go:239] * 
	* 
	W0925 04:20:33.095837    4505 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0925 04:20:33.103903    4505 out.go:177] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-652000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:523: *** TestPreload FAILED at 2023-09-25 04:20:33.120575 -0700 PDT m=+2832.996377001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-652000 -n test-preload-652000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-652000 -n test-preload-652000: exit status 7 (62.962417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-652000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-652000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-652000
--- FAIL: TestPreload (9.91s)
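
Every qemu2 start in this run fails the same way: the driver's helper, /opt/socket_vmnet/bin/socket_vmnet_client, cannot reach the socket_vmnet daemon's unix socket at /var/run/socket_vmnet, so QEMU never gets a network file descriptor. A minimal Go probe of that socket (illustrative only, not part of the test suite) reproduces the "Connection refused" seen in the STDERR lines above:

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// Dial the control socket that socket_vmnet_client hands to QEMU;
		// "connection refused" here means the socket_vmnet daemon is not
		// running (or the socket file is stale).
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the probe fails, restarting the daemon on the build agent should clear this whole family of GUEST_PROVISION failures; on a Homebrew install that would be `sudo brew services restart socket_vmnet` (an assumption about this agent's setup, not something the log confirms).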

TestScheduledStopUnix (9.94s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-531000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-531000 --memory=2048 --driver=qemu2 : exit status 80 (9.777723375s)

-- stdout --
	* [scheduled-stop-531000] minikube v1.31.2 on Darwin 13.6 (arm64)
	  - MINIKUBE_LOCATION=17297
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node scheduled-stop-531000 in cluster scheduled-stop-531000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-531000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-531000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-531000] minikube v1.31.2 on Darwin 13.6 (arm64)
	  - MINIKUBE_LOCATION=17297
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node scheduled-stop-531000 in cluster scheduled-stop-531000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-531000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-531000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:523: *** TestScheduledStopUnix FAILED at 2023-09-25 04:20:43.059925 -0700 PDT m=+2842.935719543
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-531000 -n scheduled-stop-531000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-531000 -n scheduled-stop-531000: exit status 7 (64.599875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-531000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-531000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-531000
--- FAIL: TestScheduledStopUnix (9.94s)
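
The stdout above shows minikube's single built-in retry: the first createHost fails, the half-created VM is deleted, and start.go:703 waits five seconds before one more attempt, which fails identically because the daemon never comes back. A sketch of that control flow under stated assumptions (startWithRetry is a made-up name, not minikube's actual API):

	package main

	import (
		"errors"
		"log"
		"time"
	)

	// startWithRetry mirrors the behavior logged at start.go:688/703: one
	// failed host creation is retried once after a fixed 5s delay, after
	// which the error surfaces as GUEST_PROVISION (exit status 80).
	func startWithRetry(create func() error) error {
		err := create()
		if err == nil {
			return nil
		}
		log.Printf("! StartHost failed, but will try again: %v", err)
		time.Sleep(5 * time.Second)
		return create()
	}

	func main() {
		err := startWithRetry(func() error {
			return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
		})
		log.Println("final:", err)
	}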

TestSkaffold (11.8s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe1959867447 version
skaffold_test.go:63: skaffold version: v2.7.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-538000 --memory=2600 --driver=qemu2 
E0925 04:20:52.236861    1469 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/ingress-addon-legacy-907000/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-538000 --memory=2600 --driver=qemu2 : exit status 80 (9.779373166s)

-- stdout --
	* [skaffold-538000] minikube v1.31.2 on Darwin 13.6 (arm64)
	  - MINIKUBE_LOCATION=17297
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node skaffold-538000 in cluster skaffold-538000
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-538000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-538000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-538000] minikube v1.31.2 on Darwin 13.6 (arm64)
	  - MINIKUBE_LOCATION=17297
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node skaffold-538000 in cluster skaffold-538000
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-538000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-538000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:523: *** TestSkaffold FAILED at 2023-09-25 04:20:54.86132 -0700 PDT m=+2854.737105501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-538000 -n skaffold-538000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-538000 -n skaffold-538000: exit status 7 (61.092542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-538000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-538000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-538000
--- FAIL: TestSkaffold (11.80s)
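
The stray E0925 cert_rotation.go:168 line above skaffold_test.go:66 is unrelated noise: a client-go certificate reloader from an earlier test is still watching the client.crt of the already-deleted ingress-addon-legacy-907000 profile. A minimal sketch of why that reload fails, assuming only that the watcher re-reads the key pair from disk (the path is copied verbatim from the log):

	package main

	import (
		"crypto/tls"
		"fmt"
	)

	func main() {
		// The profile directory was removed when the earlier test cleaned
		// up, so reloading its client key pair fails with "no such file or
		// directory", matching the cert_rotation.go message above.
		dir := "/Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/ingress-addon-legacy-907000/"
		if _, err := tls.LoadX509KeyPair(dir+"client.crt", dir+"client.key"); err != nil {
			fmt.Println("key failed with :", err)
		}
	}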

TestRunningBinaryUpgrade (126.3s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
E0925 04:22:55.120137    1469 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/ingress-addon-legacy-907000/client.crt: no such file or directory
version_upgrade_test.go:107: v1.6.2 release installation failed: bad response code: 404
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2023-09-25 04:23:41.023065 -0700 PDT m=+3020.898718210
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-878000 -n running-upgrade-878000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-878000 -n running-upgrade-878000: exit status 85 (84.241583ms)

-- stdout --
	* Profile "running-upgrade-878000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p running-upgrade-878000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "running-upgrade-878000" host is not running, skipping log retrieval (state="* Profile \"running-upgrade-878000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p running-upgrade-878000\"")
helpers_test.go:175: Cleaning up "running-upgrade-878000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-878000
--- FAIL: TestRunningBinaryUpgrade (126.30s)
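
Unlike the other failures here, TestRunningBinaryUpgrade never reaches the qemu2 driver: version_upgrade_test.go:107 aborts while fetching the old v1.6.2 release binary. v1.6.2 predates Apple Silicon support, so no darwin-arm64 build exists to download. Assuming the standard release-bucket URL layout (an assumption, not taken from this log), a HEAD request reproduces the 404:

	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		// Hypothetical URL following minikube's usual release layout;
		// v1.6.2 shipped before darwin-arm64 builds existed, so this
		// object is expected to be missing.
		url := "https://storage.googleapis.com/minikube/releases/v1.6.2/minikube-darwin-arm64"
		resp, err := http.Head(url)
		if err != nil {
			fmt.Println("request failed:", err)
			return
		}
		resp.Body.Close()
		fmt.Println(url, "->", resp.Status) // expected: 404 Not Found
	}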

TestKubernetesUpgrade (15.32s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-766000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:235: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-766000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.803858792s)

-- stdout --
	* [kubernetes-upgrade-766000] minikube v1.31.2 on Darwin 13.6 (arm64)
	  - MINIKUBE_LOCATION=17297
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kubernetes-upgrade-766000 in cluster kubernetes-upgrade-766000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-766000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0925 04:23:41.376124    5021 out.go:296] Setting OutFile to fd 1 ...
	I0925 04:23:41.376245    5021 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:23:41.376248    5021 out.go:309] Setting ErrFile to fd 2...
	I0925 04:23:41.376251    5021 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:23:41.376380    5021 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17297-1010/.minikube/bin
	I0925 04:23:41.377410    5021 out.go:303] Setting JSON to false
	I0925 04:23:41.392952    5021 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3196,"bootTime":1695637825,"procs":411,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.6","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 04:23:41.393052    5021 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0925 04:23:41.396808    5021 out.go:177] * [kubernetes-upgrade-766000] minikube v1.31.2 on Darwin 13.6 (arm64)
	I0925 04:23:41.403831    5021 out.go:177]   - MINIKUBE_LOCATION=17297
	I0925 04:23:41.407766    5021 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	I0925 04:23:41.403918    5021 notify.go:220] Checking for updates...
	I0925 04:23:41.413784    5021 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 04:23:41.421787    5021 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 04:23:41.424852    5021 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	I0925 04:23:41.427820    5021 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 04:23:41.431152    5021 config.go:182] Loaded profile config "cert-expiration-627000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 04:23:41.431213    5021 config.go:182] Loaded profile config "multinode-352000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 04:23:41.431256    5021 driver.go:373] Setting default libvirt URI to qemu:///system
	I0925 04:23:41.435780    5021 out.go:177] * Using the qemu2 driver based on user configuration
	I0925 04:23:41.442810    5021 start.go:298] selected driver: qemu2
	I0925 04:23:41.442815    5021 start.go:902] validating driver "qemu2" against <nil>
	I0925 04:23:41.442821    5021 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 04:23:41.444944    5021 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0925 04:23:41.447789    5021 out.go:177] * Automatically selected the socket_vmnet network
	I0925 04:23:41.450813    5021 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0925 04:23:41.450831    5021 cni.go:84] Creating CNI manager for ""
	I0925 04:23:41.450837    5021 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0925 04:23:41.450843    5021 start_flags.go:321] config:
	{Name:kubernetes-upgrade-766000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-766000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 04:23:41.455188    5021 iso.go:125] acquiring lock: {Name:mkf881a60cf9fd1672567914305ff6f7a4f13809 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 04:23:41.461799    5021 out.go:177] * Starting control plane node kubernetes-upgrade-766000 in cluster kubernetes-upgrade-766000
	I0925 04:23:41.465781    5021 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0925 04:23:41.465802    5021 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0925 04:23:41.465817    5021 cache.go:57] Caching tarball of preloaded images
	I0925 04:23:41.465881    5021 preload.go:174] Found /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0925 04:23:41.465887    5021 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0925 04:23:41.465960    5021 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/kubernetes-upgrade-766000/config.json ...
	I0925 04:23:41.465974    5021 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/kubernetes-upgrade-766000/config.json: {Name:mkef8ff151bd5a3d8b68f3e1689a41bab6564a68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 04:23:41.466170    5021 start.go:365] acquiring machines lock for kubernetes-upgrade-766000: {Name:mk918d99818e2cf21e5912bc291ed18d4b442ba6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 04:23:41.466203    5021 start.go:369] acquired machines lock for "kubernetes-upgrade-766000" in 23.708µs
	I0925 04:23:41.466212    5021 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-766000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-766000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 04:23:41.466249    5021 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 04:23:41.472867    5021 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0925 04:23:41.488705    5021 start.go:159] libmachine.API.Create for "kubernetes-upgrade-766000" (driver="qemu2")
	I0925 04:23:41.488737    5021 client.go:168] LocalClient.Create starting
	I0925 04:23:41.488787    5021 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem
	I0925 04:23:41.488826    5021 main.go:141] libmachine: Decoding PEM data...
	I0925 04:23:41.488836    5021 main.go:141] libmachine: Parsing certificate...
	I0925 04:23:41.488872    5021 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/cert.pem
	I0925 04:23:41.488890    5021 main.go:141] libmachine: Decoding PEM data...
	I0925 04:23:41.488898    5021 main.go:141] libmachine: Parsing certificate...
	I0925 04:23:41.489242    5021 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17297-1010/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0925 04:23:41.610695    5021 main.go:141] libmachine: Creating SSH key...
	I0925 04:23:41.704934    5021 main.go:141] libmachine: Creating Disk image...
	I0925 04:23:41.704943    5021 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 04:23:41.705087    5021 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/kubernetes-upgrade-766000/disk.qcow2.raw /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/kubernetes-upgrade-766000/disk.qcow2
	I0925 04:23:41.713642    5021 main.go:141] libmachine: STDOUT: 
	I0925 04:23:41.713659    5021 main.go:141] libmachine: STDERR: 
	I0925 04:23:41.713719    5021 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/kubernetes-upgrade-766000/disk.qcow2 +20000M
	I0925 04:23:41.721112    5021 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 04:23:41.721124    5021 main.go:141] libmachine: STDERR: 
	I0925 04:23:41.721143    5021 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/kubernetes-upgrade-766000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/kubernetes-upgrade-766000/disk.qcow2
	I0925 04:23:41.721151    5021 main.go:141] libmachine: Starting QEMU VM...
	I0925 04:23:41.721195    5021 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/kubernetes-upgrade-766000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/kubernetes-upgrade-766000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/kubernetes-upgrade-766000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:a4:77:1c:ca:7d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/kubernetes-upgrade-766000/disk.qcow2
	I0925 04:23:41.722778    5021 main.go:141] libmachine: STDOUT: 
	I0925 04:23:41.722791    5021 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 04:23:41.722810    5021 client.go:171] LocalClient.Create took 234.068458ms
	I0925 04:23:43.725027    5021 start.go:128] duration metric: createHost completed in 2.258742375s
	I0925 04:23:43.725116    5021 start.go:83] releasing machines lock for "kubernetes-upgrade-766000", held for 2.258901667s
	W0925 04:23:43.725189    5021 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:23:43.731635    5021 out.go:177] * Deleting "kubernetes-upgrade-766000" in qemu2 ...
	W0925 04:23:43.756037    5021 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:23:43.756061    5021 start.go:703] Will try again in 5 seconds ...
	I0925 04:23:48.758268    5021 start.go:365] acquiring machines lock for kubernetes-upgrade-766000: {Name:mk918d99818e2cf21e5912bc291ed18d4b442ba6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 04:23:48.758837    5021 start.go:369] acquired machines lock for "kubernetes-upgrade-766000" in 478.333µs
	I0925 04:23:48.758959    5021 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-766000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-766000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 04:23:48.759198    5021 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 04:23:48.768948    5021 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0925 04:23:48.818575    5021 start.go:159] libmachine.API.Create for "kubernetes-upgrade-766000" (driver="qemu2")
	I0925 04:23:48.818618    5021 client.go:168] LocalClient.Create starting
	I0925 04:23:48.818735    5021 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem
	I0925 04:23:48.818792    5021 main.go:141] libmachine: Decoding PEM data...
	I0925 04:23:48.818812    5021 main.go:141] libmachine: Parsing certificate...
	I0925 04:23:48.818890    5021 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/cert.pem
	I0925 04:23:48.818930    5021 main.go:141] libmachine: Decoding PEM data...
	I0925 04:23:48.818946    5021 main.go:141] libmachine: Parsing certificate...
	I0925 04:23:48.819562    5021 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17297-1010/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0925 04:23:48.950652    5021 main.go:141] libmachine: Creating SSH key...
	I0925 04:23:49.094851    5021 main.go:141] libmachine: Creating Disk image...
	I0925 04:23:49.094858    5021 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 04:23:49.095019    5021 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/kubernetes-upgrade-766000/disk.qcow2.raw /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/kubernetes-upgrade-766000/disk.qcow2
	I0925 04:23:49.104080    5021 main.go:141] libmachine: STDOUT: 
	I0925 04:23:49.104096    5021 main.go:141] libmachine: STDERR: 
	I0925 04:23:49.104153    5021 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/kubernetes-upgrade-766000/disk.qcow2 +20000M
	I0925 04:23:49.111520    5021 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 04:23:49.111537    5021 main.go:141] libmachine: STDERR: 
	I0925 04:23:49.111556    5021 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/kubernetes-upgrade-766000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/kubernetes-upgrade-766000/disk.qcow2
	I0925 04:23:49.111562    5021 main.go:141] libmachine: Starting QEMU VM...
	I0925 04:23:49.111602    5021 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/kubernetes-upgrade-766000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/kubernetes-upgrade-766000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/kubernetes-upgrade-766000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:00:39:2f:08:ce -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/kubernetes-upgrade-766000/disk.qcow2
	I0925 04:23:49.113183    5021 main.go:141] libmachine: STDOUT: 
	I0925 04:23:49.113194    5021 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 04:23:49.113205    5021 client.go:171] LocalClient.Create took 294.577917ms
	I0925 04:23:51.115376    5021 start.go:128] duration metric: createHost completed in 2.35615175s
	I0925 04:23:51.115487    5021 start.go:83] releasing machines lock for "kubernetes-upgrade-766000", held for 2.356618791s
	W0925 04:23:51.116001    5021 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-766000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-766000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:23:51.126633    5021 out.go:177] 
	W0925 04:23:51.129548    5021 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 04:23:51.129573    5021 out.go:239] * 
	* 
	W0925 04:23:51.132140    5021 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0925 04:23:51.141572    5021 out.go:177] 

** /stderr **
version_upgrade_test.go:237: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-766000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-766000
version_upgrade_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-766000 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-766000 status --format={{.Host}}: exit status 7 (35.222334ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-766000 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-766000 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.179401708s)

-- stdout --
	* [kubernetes-upgrade-766000] minikube v1.31.2 on Darwin 13.6 (arm64)
	  - MINIKUBE_LOCATION=17297
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node kubernetes-upgrade-766000 in cluster kubernetes-upgrade-766000
	* Restarting existing qemu2 VM for "kubernetes-upgrade-766000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-766000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0925 04:23:51.316357    5039 out.go:296] Setting OutFile to fd 1 ...
	I0925 04:23:51.316500    5039 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:23:51.316504    5039 out.go:309] Setting ErrFile to fd 2...
	I0925 04:23:51.316507    5039 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:23:51.316629    5039 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17297-1010/.minikube/bin
	I0925 04:23:51.317666    5039 out.go:303] Setting JSON to false
	I0925 04:23:51.332827    5039 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3206,"bootTime":1695637825,"procs":409,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.6","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 04:23:51.332903    5039 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0925 04:23:51.337478    5039 out.go:177] * [kubernetes-upgrade-766000] minikube v1.31.2 on Darwin 13.6 (arm64)
	I0925 04:23:51.344422    5039 out.go:177]   - MINIKUBE_LOCATION=17297
	I0925 04:23:51.348468    5039 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	I0925 04:23:51.344496    5039 notify.go:220] Checking for updates...
	I0925 04:23:51.354360    5039 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 04:23:51.358403    5039 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 04:23:51.361318    5039 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	I0925 04:23:51.364387    5039 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 04:23:51.368753    5039 config.go:182] Loaded profile config "kubernetes-upgrade-766000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0925 04:23:51.368996    5039 driver.go:373] Setting default libvirt URI to qemu:///system
	I0925 04:23:51.373377    5039 out.go:177] * Using the qemu2 driver based on existing profile
	I0925 04:23:51.380435    5039 start.go:298] selected driver: qemu2
	I0925 04:23:51.380442    5039 start.go:902] validating driver "qemu2" against &{Name:kubernetes-upgrade-766000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-766000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 04:23:51.380501    5039 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 04:23:51.382583    5039 cni.go:84] Creating CNI manager for ""
	I0925 04:23:51.382599    5039 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 04:23:51.382604    5039 start_flags.go:321] config:
	{Name:kubernetes-upgrade-766000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:kubernetes-upgrade-766000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 04:23:51.386794    5039 iso.go:125] acquiring lock: {Name:mkf881a60cf9fd1672567914305ff6f7a4f13809 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 04:23:51.394441    5039 out.go:177] * Starting control plane node kubernetes-upgrade-766000 in cluster kubernetes-upgrade-766000
	I0925 04:23:51.398340    5039 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0925 04:23:51.398359    5039 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0925 04:23:51.398375    5039 cache.go:57] Caching tarball of preloaded images
	I0925 04:23:51.398439    5039 preload.go:174] Found /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0925 04:23:51.398445    5039 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0925 04:23:51.398500    5039 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/kubernetes-upgrade-766000/config.json ...
	I0925 04:23:51.398855    5039 start.go:365] acquiring machines lock for kubernetes-upgrade-766000: {Name:mk918d99818e2cf21e5912bc291ed18d4b442ba6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 04:23:51.398880    5039 start.go:369] acquired machines lock for "kubernetes-upgrade-766000" in 19.416µs
	I0925 04:23:51.398890    5039 start.go:96] Skipping create...Using existing machine configuration
	I0925 04:23:51.398894    5039 fix.go:54] fixHost starting: 
	I0925 04:23:51.399002    5039 fix.go:102] recreateIfNeeded on kubernetes-upgrade-766000: state=Stopped err=<nil>
	W0925 04:23:51.399012    5039 fix.go:128] unexpected machine state, will restart: <nil>
	I0925 04:23:51.405392    5039 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-766000" ...
	I0925 04:23:51.409400    5039 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/kubernetes-upgrade-766000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/kubernetes-upgrade-766000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/kubernetes-upgrade-766000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:00:39:2f:08:ce -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/kubernetes-upgrade-766000/disk.qcow2
	I0925 04:23:51.411179    5039 main.go:141] libmachine: STDOUT: 
	I0925 04:23:51.411197    5039 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 04:23:51.411227    5039 fix.go:56] fixHost completed within 12.330917ms
	I0925 04:23:51.411232    5039 start.go:83] releasing machines lock for "kubernetes-upgrade-766000", held for 12.347834ms
	W0925 04:23:51.411237    5039 start.go:688] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 04:23:51.411280    5039 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:23:51.411284    5039 start.go:703] Will try again in 5 seconds ...
	I0925 04:23:56.413529    5039 start.go:365] acquiring machines lock for kubernetes-upgrade-766000: {Name:mk918d99818e2cf21e5912bc291ed18d4b442ba6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 04:23:56.413850    5039 start.go:369] acquired machines lock for "kubernetes-upgrade-766000" in 244.834µs
	I0925 04:23:56.413985    5039 start.go:96] Skipping create...Using existing machine configuration
	I0925 04:23:56.414007    5039 fix.go:54] fixHost starting: 
	I0925 04:23:56.414759    5039 fix.go:102] recreateIfNeeded on kubernetes-upgrade-766000: state=Stopped err=<nil>
	W0925 04:23:56.414785    5039 fix.go:128] unexpected machine state, will restart: <nil>
	I0925 04:23:56.422159    5039 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-766000" ...
	I0925 04:23:56.426338    5039 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/kubernetes-upgrade-766000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/kubernetes-upgrade-766000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/kubernetes-upgrade-766000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:00:39:2f:08:ce -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/kubernetes-upgrade-766000/disk.qcow2
	I0925 04:23:56.435062    5039 main.go:141] libmachine: STDOUT: 
	I0925 04:23:56.435126    5039 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 04:23:56.435228    5039 fix.go:56] fixHost completed within 21.209541ms
	I0925 04:23:56.435252    5039 start.go:83] releasing machines lock for "kubernetes-upgrade-766000", held for 21.380167ms
	W0925 04:23:56.435414    5039 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-766000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-766000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:23:56.442182    5039 out.go:177] 
	W0925 04:23:56.446115    5039 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 04:23:56.446139    5039 out.go:239] * 
	* 
	W0925 04:23:56.448845    5039 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0925 04:23:56.458125    5039 out.go:177] 

** /stderr **
version_upgrade_test.go:258: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-766000 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-766000 version --output=json
version_upgrade_test.go:261: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-766000 version --output=json: exit status 1 (63.944ms)

** stderr ** 
	error: context "kubernetes-upgrade-766000" does not exist

** /stderr **
version_upgrade_test.go:263: error running kubectl: exit status 1
panic.go:523: *** TestKubernetesUpgrade FAILED at 2023-09-25 04:23:56.5358 -0700 PDT m=+3036.411440793
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-766000 -n kubernetes-upgrade-766000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-766000 -n kubernetes-upgrade-766000: exit status 7 (33.164375ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-766000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-766000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-766000
--- FAIL: TestKubernetesUpgrade (15.32s)
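
The qemu2 start failures in this section all trace to the same root cause, visible in the stderr above: the socket_vmnet daemon is not accepting connections on /var/run/socket_vmnet, so socket_vmnet_client cannot hand QEMU its network file descriptor. A minimal shell sketch for confirming this on the build host, assuming socket_vmnet runs as a launchd service as minikube's qemu2 driver setup describes (the service label varies by install method, so verify it locally):

    # Does the control socket exist at the path the logs show?
    ls -l /var/run/socket_vmnet
    # Is any socket_vmnet daemon loaded? (name assumed; check your install)
    sudo launchctl list | grep -i socket_vmnet

If the socket is missing or nothing matches, the daemon is down and every VM boot below fails before Kubernetes is involved at all.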

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.42s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.31.2 on darwin (arm64)
- MINIKUBE_LOCATION=17297
- KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3976403734/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.42s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.11s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.31.2 on darwin (arm64)
- MINIKUBE_LOCATION=17297
- KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2644312834/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.11s)
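
Both TestHyperkitDriverSkipUpgrade subtests fail with DRV_UNSUPPORTED_OS, which is expected rather than a regression: hyperkit is an Intel-only hypervisor and this agent is Apple Silicon. A one-line confirmation of the host architecture with standard macOS tooling:

    # Prints "arm64" on Apple Silicon, where the hyperkit driver cannot run
    uname -m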

TestStoppedBinaryUpgrade/Setup (171.29s)

=== RUN   TestStoppedBinaryUpgrade/Setup
E0925 04:24:18.710850    1469 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/functional-742000/client.crt: no such file or directory
version_upgrade_test.go:168: v1.6.2 release installation failed: bad response code: 404
--- FAIL: TestStoppedBinaryUpgrade/Setup (171.29s)
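
The 404 comes from the setup step fetching a minikube v1.6.2 release binary; v1.6.2 predates darwin/arm64 release artifacts, so no matching asset exists for this agent. A spot-check of the asset the test presumably requests (URL pattern assumed from minikube's GCS release bucket; adjust if the test pulls from elsewhere):

    # Expect a 404 here: v1.6.2 never shipped a darwin-arm64 binary
    curl -sI https://storage.googleapis.com/minikube/releases/v1.6.2/minikube-darwin-arm64 | head -n 1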

TestPause/serial/Start (9.85s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-177000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-177000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.7791s)

-- stdout --
	* [pause-177000] minikube v1.31.2 on Darwin 13.6 (arm64)
	  - MINIKUBE_LOCATION=17297
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node pause-177000 in cluster pause-177000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-177000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-177000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-177000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-177000 -n pause-177000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-177000 -n pause-177000: exit status 7 (65.121667ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-177000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.85s)
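
The same pattern repeats for every test that boots a VM: the qemu-img disk setup succeeds, then the socket_vmnet_client wrapper fails before QEMU gets a usable netdev fd. Since the client simply connects to the socket and execs the given command with that fd attached, it can be probed in isolation; a sketch using the exact paths from the logs above (the echo payload is an arbitrary stand-in):

    # Should print "ok" when the daemon is healthy; on this host it fails with
    # 'Failed to connect to "/var/run/socket_vmnet": Connection refused'
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet echo ok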

TestNoKubernetes/serial/StartWithK8s (9.76s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-139000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-139000 --driver=qemu2 : exit status 80 (9.689319125s)

-- stdout --
	* [NoKubernetes-139000] minikube v1.31.2 on Darwin 13.6 (arm64)
	  - MINIKUBE_LOCATION=17297
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node NoKubernetes-139000 in cluster NoKubernetes-139000
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-139000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-139000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-139000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-139000 -n NoKubernetes-139000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-139000 -n NoKubernetes-139000: exit status 7 (70.105917ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-139000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.76s)

TestNoKubernetes/serial/StartWithStopK8s (5.31s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-139000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-139000 --no-kubernetes --driver=qemu2 : exit status 80 (5.238071625s)

-- stdout --
	* [NoKubernetes-139000] minikube v1.31.2 on Darwin 13.6 (arm64)
	  - MINIKUBE_LOCATION=17297
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-139000
	* Restarting existing qemu2 VM for "NoKubernetes-139000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-139000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-139000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-139000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-139000 -n NoKubernetes-139000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-139000 -n NoKubernetes-139000: exit status 7 (68.147791ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-139000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.31s)
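
Until the daemon is reachable again, each retry in this group fails identically, so the fix is host-side rather than anything minikube can do. A sketch of restarting socket_vmnet, assuming one of the two common install layouts (both the Homebrew service name and the launchd label here are assumptions; confirm against the host before running):

    # If installed via Homebrew:
    sudo brew services restart socket_vmnet
    # If installed from source with the upstream launchd plist:
    sudo launchctl kickstart -k system/io.github.lima-vm.socket_vmnet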

TestNoKubernetes/serial/Start (5.3s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-139000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-139000 --no-kubernetes --driver=qemu2 : exit status 80 (5.236560208s)

-- stdout --
	* [NoKubernetes-139000] minikube v1.31.2 on Darwin 13.6 (arm64)
	  - MINIKUBE_LOCATION=17297
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-139000
	* Restarting existing qemu2 VM for "NoKubernetes-139000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-139000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-139000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-139000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-139000 -n NoKubernetes-139000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-139000 -n NoKubernetes-139000: exit status 7 (64.730833ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-139000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.30s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (5.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-139000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-139000 --driver=qemu2 : exit status 80 (5.231711084s)

                                                
                                                
-- stdout --
	* [NoKubernetes-139000] minikube v1.31.2 on Darwin 13.6 (arm64)
	  - MINIKUBE_LOCATION=17297
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-139000
	* Restarting existing qemu2 VM for "NoKubernetes-139000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-139000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-139000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-139000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-139000 -n NoKubernetes-139000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-139000 -n NoKubernetes-139000: exit status 7 (64.812917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-139000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.30s)
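
Each failed start leaves a half-created profile behind, and the error text itself recommends deleting it. Once socket_vmnet is back, cleaning up before rerunning follows directly from that advice (profile names taken from this report):

    # Remove the stale profiles these aborted runs left behind
    out/minikube-darwin-arm64 delete -p NoKubernetes-139000
    out/minikube-darwin-arm64 delete -p pause-177000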

TestNetworkPlugins/group/auto/Start (9.72s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-570000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-570000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.717866916s)

-- stdout --
	* [auto-570000] minikube v1.31.2 on Darwin 13.6 (arm64)
	  - MINIKUBE_LOCATION=17297
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node auto-570000 in cluster auto-570000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-570000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0925 04:25:11.319588    5179 out.go:296] Setting OutFile to fd 1 ...
	I0925 04:25:11.319744    5179 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:25:11.319747    5179 out.go:309] Setting ErrFile to fd 2...
	I0925 04:25:11.319749    5179 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:25:11.319896    5179 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17297-1010/.minikube/bin
	I0925 04:25:11.320944    5179 out.go:303] Setting JSON to false
	I0925 04:25:11.336280    5179 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3286,"bootTime":1695637825,"procs":408,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.6","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 04:25:11.336368    5179 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0925 04:25:11.341551    5179 out.go:177] * [auto-570000] minikube v1.31.2 on Darwin 13.6 (arm64)
	I0925 04:25:11.349460    5179 out.go:177]   - MINIKUBE_LOCATION=17297
	I0925 04:25:11.353494    5179 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	I0925 04:25:11.349516    5179 notify.go:220] Checking for updates...
	I0925 04:25:11.359431    5179 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 04:25:11.362484    5179 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 04:25:11.363957    5179 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	I0925 04:25:11.367495    5179 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 04:25:11.370863    5179 config.go:182] Loaded profile config "multinode-352000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 04:25:11.370905    5179 driver.go:373] Setting default libvirt URI to qemu:///system
	I0925 04:25:11.374295    5179 out.go:177] * Using the qemu2 driver based on user configuration
	I0925 04:25:11.381442    5179 start.go:298] selected driver: qemu2
	I0925 04:25:11.381451    5179 start.go:902] validating driver "qemu2" against <nil>
	I0925 04:25:11.381458    5179 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 04:25:11.383543    5179 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0925 04:25:11.386532    5179 out.go:177] * Automatically selected the socket_vmnet network
	I0925 04:25:11.389560    5179 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0925 04:25:11.389587    5179 cni.go:84] Creating CNI manager for ""
	I0925 04:25:11.389595    5179 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 04:25:11.389599    5179 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0925 04:25:11.389603    5179 start_flags.go:321] config:
	{Name:auto-570000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:auto-570000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 04:25:11.393635    5179 iso.go:125] acquiring lock: {Name:mkf881a60cf9fd1672567914305ff6f7a4f13809 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 04:25:11.399422    5179 out.go:177] * Starting control plane node auto-570000 in cluster auto-570000
	I0925 04:25:11.403433    5179 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0925 04:25:11.403453    5179 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0925 04:25:11.403470    5179 cache.go:57] Caching tarball of preloaded images
	I0925 04:25:11.403540    5179 preload.go:174] Found /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0925 04:25:11.403545    5179 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0925 04:25:11.403605    5179 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/auto-570000/config.json ...
	I0925 04:25:11.403623    5179 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/auto-570000/config.json: {Name:mk21495e8706180abac9a95a4ed9402fe93022a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 04:25:11.403824    5179 start.go:365] acquiring machines lock for auto-570000: {Name:mk918d99818e2cf21e5912bc291ed18d4b442ba6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 04:25:11.403853    5179 start.go:369] acquired machines lock for "auto-570000" in 23.5µs
	I0925 04:25:11.403862    5179 start.go:93] Provisioning new machine with config: &{Name:auto-570000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:auto-570000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 04:25:11.403887    5179 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 04:25:11.412403    5179 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0925 04:25:11.427737    5179 start.go:159] libmachine.API.Create for "auto-570000" (driver="qemu2")
	I0925 04:25:11.427766    5179 client.go:168] LocalClient.Create starting
	I0925 04:25:11.427818    5179 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem
	I0925 04:25:11.427841    5179 main.go:141] libmachine: Decoding PEM data...
	I0925 04:25:11.427854    5179 main.go:141] libmachine: Parsing certificate...
	I0925 04:25:11.427892    5179 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/cert.pem
	I0925 04:25:11.427911    5179 main.go:141] libmachine: Decoding PEM data...
	I0925 04:25:11.427918    5179 main.go:141] libmachine: Parsing certificate...
	I0925 04:25:11.428240    5179 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17297-1010/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0925 04:25:11.546968    5179 main.go:141] libmachine: Creating SSH key...
	I0925 04:25:11.594177    5179 main.go:141] libmachine: Creating Disk image...
	I0925 04:25:11.594182    5179 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 04:25:11.594322    5179 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/auto-570000/disk.qcow2.raw /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/auto-570000/disk.qcow2
	I0925 04:25:11.602848    5179 main.go:141] libmachine: STDOUT: 
	I0925 04:25:11.602867    5179 main.go:141] libmachine: STDERR: 
	I0925 04:25:11.602926    5179 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/auto-570000/disk.qcow2 +20000M
	I0925 04:25:11.610088    5179 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 04:25:11.610099    5179 main.go:141] libmachine: STDERR: 
	I0925 04:25:11.610112    5179 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/auto-570000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/auto-570000/disk.qcow2
	I0925 04:25:11.610116    5179 main.go:141] libmachine: Starting QEMU VM...
	I0925 04:25:11.610147    5179 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/auto-570000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/auto-570000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/auto-570000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:df:8e:ff:f4:9c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/auto-570000/disk.qcow2
	I0925 04:25:11.611696    5179 main.go:141] libmachine: STDOUT: 
	I0925 04:25:11.611716    5179 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 04:25:11.611735    5179 client.go:171] LocalClient.Create took 183.964209ms
	I0925 04:25:13.613940    5179 start.go:128] duration metric: createHost completed in 2.210022334s
	I0925 04:25:13.614038    5179 start.go:83] releasing machines lock for "auto-570000", held for 2.210173792s
	W0925 04:25:13.614150    5179 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:25:13.624345    5179 out.go:177] * Deleting "auto-570000" in qemu2 ...
	W0925 04:25:13.644379    5179 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:25:13.644403    5179 start.go:703] Will try again in 5 seconds ...
	I0925 04:25:18.646679    5179 start.go:365] acquiring machines lock for auto-570000: {Name:mk918d99818e2cf21e5912bc291ed18d4b442ba6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 04:25:18.647215    5179 start.go:369] acquired machines lock for "auto-570000" in 409.125µs
	I0925 04:25:18.647360    5179 start.go:93] Provisioning new machine with config: &{Name:auto-570000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.28.2 ClusterName:auto-570000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144
MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 04:25:18.647672    5179 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 04:25:18.658190    5179 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0925 04:25:18.705732    5179 start.go:159] libmachine.API.Create for "auto-570000" (driver="qemu2")
	I0925 04:25:18.705774    5179 client.go:168] LocalClient.Create starting
	I0925 04:25:18.705901    5179 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem
	I0925 04:25:18.705966    5179 main.go:141] libmachine: Decoding PEM data...
	I0925 04:25:18.705986    5179 main.go:141] libmachine: Parsing certificate...
	I0925 04:25:18.706051    5179 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/cert.pem
	I0925 04:25:18.706085    5179 main.go:141] libmachine: Decoding PEM data...
	I0925 04:25:18.706101    5179 main.go:141] libmachine: Parsing certificate...
	I0925 04:25:18.706560    5179 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17297-1010/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0925 04:25:18.832301    5179 main.go:141] libmachine: Creating SSH key...
	I0925 04:25:18.949152    5179 main.go:141] libmachine: Creating Disk image...
	I0925 04:25:18.949159    5179 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 04:25:18.949303    5179 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/auto-570000/disk.qcow2.raw /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/auto-570000/disk.qcow2
	I0925 04:25:18.957785    5179 main.go:141] libmachine: STDOUT: 
	I0925 04:25:18.957804    5179 main.go:141] libmachine: STDERR: 
	I0925 04:25:18.957858    5179 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/auto-570000/disk.qcow2 +20000M
	I0925 04:25:18.965199    5179 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 04:25:18.965222    5179 main.go:141] libmachine: STDERR: 
	I0925 04:25:18.965241    5179 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/auto-570000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/auto-570000/disk.qcow2
	I0925 04:25:18.965249    5179 main.go:141] libmachine: Starting QEMU VM...
	I0925 04:25:18.965290    5179 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/auto-570000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/auto-570000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/auto-570000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:95:92:4d:19:e9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/auto-570000/disk.qcow2
	I0925 04:25:18.966925    5179 main.go:141] libmachine: STDOUT: 
	I0925 04:25:18.966937    5179 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 04:25:18.966950    5179 client.go:171] LocalClient.Create took 261.170709ms
	I0925 04:25:20.969127    5179 start.go:128] duration metric: createHost completed in 2.321423334s
	I0925 04:25:20.969199    5179 start.go:83] releasing machines lock for "auto-570000", held for 2.321959667s
	W0925 04:25:20.969652    5179 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-570000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-570000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:25:20.981194    5179 out.go:177] 
	W0925 04:25:20.985403    5179 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 04:25:20.985457    5179 out.go:239] * 
	* 
	W0925 04:25:20.988454    5179 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0925 04:25:20.998309    5179 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.72s)
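
Note: every failure in this group shares one root cause, visible in the stderr above: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), meaning the socket_vmnet daemon was not listening on that path on the CI host when QEMU was launched. A minimal preflight probe of that socket, sketched below in Go (probeSocketVMnet is a hypothetical helper, not part of minikube), would surface the condition before any VM creation is attempted:

    package main

    import (
    	"fmt"
    	"net"
    	"os"
    	"time"
    )

    // probeSocketVMnet attempts a unix-domain connection to the socket_vmnet
    // daemon's socket. A "connection refused" from this dial is the same
    // condition socket_vmnet_client reports in the logs above.
    func probeSocketVMnet(path string, timeout time.Duration) error {
    	conn, err := net.DialTimeout("unix", path, timeout)
    	if err != nil {
    		return fmt.Errorf("socket_vmnet not reachable at %s: %w", path, err)
    	}
    	return conn.Close()
    }

    func main() {
    	// /var/run/socket_vmnet matches SocketVMnetPath in the machine config logged above.
    	if err := probeSocketVMnet("/var/run/socket_vmnet", 2*time.Second); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("socket_vmnet is accepting connections")
    }

The config dump above points at /opt/socket_vmnet/bin/socket_vmnet_client (SocketVMnetClientPath) and /var/run/socket_vmnet (SocketVMnetPath); restarting the socket_vmnet daemon on the agent (it normally runs as a root service) would be the expected fix for these refusals.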

TestNetworkPlugins/group/calico/Start (9.74s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-570000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
E0925 04:25:26.841663    1469 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/client.crt: no such file or directory
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-570000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.734827542s)

-- stdout --
	* [calico-570000] minikube v1.31.2 on Darwin 13.6 (arm64)
	  - MINIKUBE_LOCATION=17297
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node calico-570000 in cluster calico-570000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-570000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0925 04:25:23.054317    5292 out.go:296] Setting OutFile to fd 1 ...
	I0925 04:25:23.054437    5292 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:25:23.054439    5292 out.go:309] Setting ErrFile to fd 2...
	I0925 04:25:23.054442    5292 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:25:23.054578    5292 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17297-1010/.minikube/bin
	I0925 04:25:23.055617    5292 out.go:303] Setting JSON to false
	I0925 04:25:23.070785    5292 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3298,"bootTime":1695637825,"procs":411,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.6","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 04:25:23.070865    5292 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0925 04:25:23.076098    5292 out.go:177] * [calico-570000] minikube v1.31.2 on Darwin 13.6 (arm64)
	I0925 04:25:23.083081    5292 out.go:177]   - MINIKUBE_LOCATION=17297
	I0925 04:25:23.083151    5292 notify.go:220] Checking for updates...
	I0925 04:25:23.090952    5292 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	I0925 04:25:23.094103    5292 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 04:25:23.096995    5292 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 04:25:23.099974    5292 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	I0925 04:25:23.102998    5292 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 04:25:23.106424    5292 config.go:182] Loaded profile config "multinode-352000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 04:25:23.106470    5292 driver.go:373] Setting default libvirt URI to qemu:///system
	I0925 04:25:23.110958    5292 out.go:177] * Using the qemu2 driver based on user configuration
	I0925 04:25:23.118024    5292 start.go:298] selected driver: qemu2
	I0925 04:25:23.118031    5292 start.go:902] validating driver "qemu2" against <nil>
	I0925 04:25:23.118038    5292 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 04:25:23.120217    5292 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0925 04:25:23.122955    5292 out.go:177] * Automatically selected the socket_vmnet network
	I0925 04:25:23.126053    5292 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0925 04:25:23.126074    5292 cni.go:84] Creating CNI manager for "calico"
	I0925 04:25:23.126078    5292 start_flags.go:316] Found "Calico" CNI - setting NetworkPlugin=cni
	I0925 04:25:23.126084    5292 start_flags.go:321] config:
	{Name:calico-570000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:calico-570000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHA
gentPID:0 AutoPauseInterval:1m0s}
	I0925 04:25:23.130310    5292 iso.go:125] acquiring lock: {Name:mkf881a60cf9fd1672567914305ff6f7a4f13809 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 04:25:23.136018    5292 out.go:177] * Starting control plane node calico-570000 in cluster calico-570000
	I0925 04:25:23.140038    5292 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0925 04:25:23.140059    5292 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0925 04:25:23.140076    5292 cache.go:57] Caching tarball of preloaded images
	I0925 04:25:23.140146    5292 preload.go:174] Found /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0925 04:25:23.140152    5292 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0925 04:25:23.140228    5292 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/calico-570000/config.json ...
	I0925 04:25:23.140241    5292 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/calico-570000/config.json: {Name:mkd90483ce843fb4938649d7414c12fbc764fa6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 04:25:23.140447    5292 start.go:365] acquiring machines lock for calico-570000: {Name:mk918d99818e2cf21e5912bc291ed18d4b442ba6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 04:25:23.140486    5292 start.go:369] acquired machines lock for "calico-570000" in 32.792µs
	I0925 04:25:23.140496    5292 start.go:93] Provisioning new machine with config: &{Name:calico-570000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.28.2 ClusterName:calico-570000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 04:25:23.140527    5292 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 04:25:23.148989    5292 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0925 04:25:23.165447    5292 start.go:159] libmachine.API.Create for "calico-570000" (driver="qemu2")
	I0925 04:25:23.165475    5292 client.go:168] LocalClient.Create starting
	I0925 04:25:23.165551    5292 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem
	I0925 04:25:23.165575    5292 main.go:141] libmachine: Decoding PEM data...
	I0925 04:25:23.165586    5292 main.go:141] libmachine: Parsing certificate...
	I0925 04:25:23.165624    5292 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/cert.pem
	I0925 04:25:23.165643    5292 main.go:141] libmachine: Decoding PEM data...
	I0925 04:25:23.165651    5292 main.go:141] libmachine: Parsing certificate...
	I0925 04:25:23.165988    5292 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17297-1010/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0925 04:25:23.281625    5292 main.go:141] libmachine: Creating SSH key...
	I0925 04:25:23.404772    5292 main.go:141] libmachine: Creating Disk image...
	I0925 04:25:23.404782    5292 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 04:25:23.404915    5292 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/calico-570000/disk.qcow2.raw /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/calico-570000/disk.qcow2
	I0925 04:25:23.413400    5292 main.go:141] libmachine: STDOUT: 
	I0925 04:25:23.413415    5292 main.go:141] libmachine: STDERR: 
	I0925 04:25:23.413468    5292 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/calico-570000/disk.qcow2 +20000M
	I0925 04:25:23.420693    5292 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 04:25:23.420706    5292 main.go:141] libmachine: STDERR: 
	I0925 04:25:23.420726    5292 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/calico-570000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/calico-570000/disk.qcow2
	I0925 04:25:23.420733    5292 main.go:141] libmachine: Starting QEMU VM...
	I0925 04:25:23.420775    5292 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/calico-570000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/calico-570000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/calico-570000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:4b:48:f1:89:c1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/calico-570000/disk.qcow2
	I0925 04:25:23.422323    5292 main.go:141] libmachine: STDOUT: 
	I0925 04:25:23.422338    5292 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 04:25:23.422355    5292 client.go:171] LocalClient.Create took 256.8765ms
	I0925 04:25:25.424573    5292 start.go:128] duration metric: createHost completed in 2.284006334s
	I0925 04:25:25.424647    5292 start.go:83] releasing machines lock for "calico-570000", held for 2.284149458s
	W0925 04:25:25.424698    5292 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:25:25.436138    5292 out.go:177] * Deleting "calico-570000" in qemu2 ...
	W0925 04:25:25.456550    5292 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:25:25.456576    5292 start.go:703] Will try again in 5 seconds ...
	I0925 04:25:30.458869    5292 start.go:365] acquiring machines lock for calico-570000: {Name:mk918d99818e2cf21e5912bc291ed18d4b442ba6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 04:25:30.459325    5292 start.go:369] acquired machines lock for "calico-570000" in 362.375µs
	I0925 04:25:30.459458    5292 start.go:93] Provisioning new machine with config: &{Name:calico-570000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.28.2 ClusterName:calico-570000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 04:25:30.459716    5292 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 04:25:30.469328    5292 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0925 04:25:30.517229    5292 start.go:159] libmachine.API.Create for "calico-570000" (driver="qemu2")
	I0925 04:25:30.517306    5292 client.go:168] LocalClient.Create starting
	I0925 04:25:30.517475    5292 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem
	I0925 04:25:30.517525    5292 main.go:141] libmachine: Decoding PEM data...
	I0925 04:25:30.517549    5292 main.go:141] libmachine: Parsing certificate...
	I0925 04:25:30.517623    5292 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/cert.pem
	I0925 04:25:30.517663    5292 main.go:141] libmachine: Decoding PEM data...
	I0925 04:25:30.517677    5292 main.go:141] libmachine: Parsing certificate...
	I0925 04:25:30.518286    5292 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17297-1010/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0925 04:25:30.649586    5292 main.go:141] libmachine: Creating SSH key...
	I0925 04:25:30.704466    5292 main.go:141] libmachine: Creating Disk image...
	I0925 04:25:30.704471    5292 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 04:25:30.704614    5292 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/calico-570000/disk.qcow2.raw /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/calico-570000/disk.qcow2
	I0925 04:25:30.713371    5292 main.go:141] libmachine: STDOUT: 
	I0925 04:25:30.713383    5292 main.go:141] libmachine: STDERR: 
	I0925 04:25:30.713460    5292 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/calico-570000/disk.qcow2 +20000M
	I0925 04:25:30.720666    5292 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 04:25:30.720681    5292 main.go:141] libmachine: STDERR: 
	I0925 04:25:30.720696    5292 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/calico-570000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/calico-570000/disk.qcow2
	I0925 04:25:30.720701    5292 main.go:141] libmachine: Starting QEMU VM...
	I0925 04:25:30.720745    5292 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/calico-570000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/calico-570000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/calico-570000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:64:5d:4e:30:77 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/calico-570000/disk.qcow2
	I0925 04:25:30.722341    5292 main.go:141] libmachine: STDOUT: 
	I0925 04:25:30.722367    5292 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 04:25:30.722382    5292 client.go:171] LocalClient.Create took 205.068958ms
	I0925 04:25:32.724577    5292 start.go:128] duration metric: createHost completed in 2.2648025s
	I0925 04:25:32.724656    5292 start.go:83] releasing machines lock for "calico-570000", held for 2.265307458s
	W0925 04:25:32.725133    5292 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-570000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-570000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:25:32.733523    5292 out.go:177] 
	W0925 04:25:32.737692    5292 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 04:25:32.737719    5292 out.go:239] * 
	* 
	W0925 04:25:32.740449    5292 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0925 04:25:32.749419    5292 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.74s)
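
Note: same failure mode as the auto test above. The log also shows minikube's recovery path: the first StartHost failure triggers a delete of the half-created machine and a single retry after a fixed 5-second delay ("Will try again in 5 seconds ..."); when the retry hits the same refused socket, the run exits with GUEST_PROVISION (exit status 80 in the test output). The shape of that retry, as a hedged sketch (startHostWithRetry and createHost are illustrative names, not minikube's actual code):

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // startHostWithRetry mirrors the behavior visible in the logs: one retry
    // after a fixed 5s delay, then the error is surfaced to the caller.
    func startHostWithRetry(createHost func() error) error {
    	err := createHost()
    	if err == nil {
    		return nil
    	}
    	fmt.Printf("! StartHost failed, but will try again: %v\n", err)
    	time.Sleep(5 * time.Second)
    	if err := createHost(); err != nil {
    		return fmt.Errorf("error provisioning guest: %w", err)
    	}
    	return nil
    }

    func main() {
    	err := startHostWithRetry(func() error {
    		// Stand-in for the provisioning step; always fails the way the CI host did.
    		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    	})
    	if err != nil {
    		fmt.Println("X Exiting due to GUEST_PROVISION:", err)
    	}
    }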

TestNetworkPlugins/group/custom-flannel/Start (9.77s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-570000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
E0925 04:25:38.962968    1469 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/ingress-addon-legacy-907000/client.crt: no such file or directory
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-570000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.766522458s)

-- stdout --
	* [custom-flannel-570000] minikube v1.31.2 on Darwin 13.6 (arm64)
	  - MINIKUBE_LOCATION=17297
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node custom-flannel-570000 in cluster custom-flannel-570000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-570000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0925 04:25:35.002231    5416 out.go:296] Setting OutFile to fd 1 ...
	I0925 04:25:35.002364    5416 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:25:35.002368    5416 out.go:309] Setting ErrFile to fd 2...
	I0925 04:25:35.002370    5416 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:25:35.002506    5416 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17297-1010/.minikube/bin
	I0925 04:25:35.003537    5416 out.go:303] Setting JSON to false
	I0925 04:25:35.018908    5416 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3310,"bootTime":1695637825,"procs":415,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.6","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 04:25:35.018987    5416 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0925 04:25:35.024354    5416 out.go:177] * [custom-flannel-570000] minikube v1.31.2 on Darwin 13.6 (arm64)
	I0925 04:25:35.032360    5416 out.go:177]   - MINIKUBE_LOCATION=17297
	I0925 04:25:35.036305    5416 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	I0925 04:25:35.032427    5416 notify.go:220] Checking for updates...
	I0925 04:25:35.042325    5416 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 04:25:35.045303    5416 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 04:25:35.048289    5416 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	I0925 04:25:35.051333    5416 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 04:25:35.054556    5416 config.go:182] Loaded profile config "multinode-352000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 04:25:35.054594    5416 driver.go:373] Setting default libvirt URI to qemu:///system
	I0925 04:25:35.058291    5416 out.go:177] * Using the qemu2 driver based on user configuration
	I0925 04:25:35.065294    5416 start.go:298] selected driver: qemu2
	I0925 04:25:35.065302    5416 start.go:902] validating driver "qemu2" against <nil>
	I0925 04:25:35.065310    5416 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 04:25:35.067451    5416 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0925 04:25:35.070307    5416 out.go:177] * Automatically selected the socket_vmnet network
	I0925 04:25:35.073423    5416 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0925 04:25:35.073444    5416 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0925 04:25:35.073454    5416 start_flags.go:316] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0925 04:25:35.073460    5416 start_flags.go:321] config:
	{Name:custom-flannel-570000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:custom-flannel-570000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/sock
et_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 04:25:35.077909    5416 iso.go:125] acquiring lock: {Name:mkf881a60cf9fd1672567914305ff6f7a4f13809 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 04:25:35.085330    5416 out.go:177] * Starting control plane node custom-flannel-570000 in cluster custom-flannel-570000
	I0925 04:25:35.089199    5416 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0925 04:25:35.089218    5416 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0925 04:25:35.089234    5416 cache.go:57] Caching tarball of preloaded images
	I0925 04:25:35.089293    5416 preload.go:174] Found /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0925 04:25:35.089299    5416 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0925 04:25:35.089381    5416 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/custom-flannel-570000/config.json ...
	I0925 04:25:35.089394    5416 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/custom-flannel-570000/config.json: {Name:mkfc0b1557ae5809eb1bea96ec2bcfacceba40d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 04:25:35.089585    5416 start.go:365] acquiring machines lock for custom-flannel-570000: {Name:mk918d99818e2cf21e5912bc291ed18d4b442ba6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 04:25:35.089614    5416 start.go:369] acquired machines lock for "custom-flannel-570000" in 24µs
	I0925 04:25:35.089623    5416 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-570000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.28.2 ClusterName:custom-flannel-570000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p200
0.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 04:25:35.089648    5416 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 04:25:35.097293    5416 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0925 04:25:35.112533    5416 start.go:159] libmachine.API.Create for "custom-flannel-570000" (driver="qemu2")
	I0925 04:25:35.112554    5416 client.go:168] LocalClient.Create starting
	I0925 04:25:35.112609    5416 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem
	I0925 04:25:35.112635    5416 main.go:141] libmachine: Decoding PEM data...
	I0925 04:25:35.112645    5416 main.go:141] libmachine: Parsing certificate...
	I0925 04:25:35.112684    5416 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/cert.pem
	I0925 04:25:35.112703    5416 main.go:141] libmachine: Decoding PEM data...
	I0925 04:25:35.112710    5416 main.go:141] libmachine: Parsing certificate...
	I0925 04:25:35.113069    5416 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17297-1010/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0925 04:25:35.229214    5416 main.go:141] libmachine: Creating SSH key...
	I0925 04:25:35.334178    5416 main.go:141] libmachine: Creating Disk image...
	I0925 04:25:35.334184    5416 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 04:25:35.334342    5416 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/custom-flannel-570000/disk.qcow2.raw /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/custom-flannel-570000/disk.qcow2
	I0925 04:25:35.342983    5416 main.go:141] libmachine: STDOUT: 
	I0925 04:25:35.342996    5416 main.go:141] libmachine: STDERR: 
	I0925 04:25:35.343055    5416 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/custom-flannel-570000/disk.qcow2 +20000M
	I0925 04:25:35.350257    5416 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 04:25:35.350278    5416 main.go:141] libmachine: STDERR: 
	I0925 04:25:35.350299    5416 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/custom-flannel-570000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/custom-flannel-570000/disk.qcow2
	I0925 04:25:35.350309    5416 main.go:141] libmachine: Starting QEMU VM...
	I0925 04:25:35.350352    5416 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/custom-flannel-570000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/custom-flannel-570000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/custom-flannel-570000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:b9:ee:b5:89:4d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/custom-flannel-570000/disk.qcow2
	I0925 04:25:35.351931    5416 main.go:141] libmachine: STDOUT: 
	I0925 04:25:35.351945    5416 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 04:25:35.351963    5416 client.go:171] LocalClient.Create took 239.405875ms
	I0925 04:25:37.354136    5416 start.go:128] duration metric: createHost completed in 2.264467s
	I0925 04:25:37.354205    5416 start.go:83] releasing machines lock for "custom-flannel-570000", held for 2.264579s
	W0925 04:25:37.354276    5416 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:25:37.363640    5416 out.go:177] * Deleting "custom-flannel-570000" in qemu2 ...
	W0925 04:25:37.388110    5416 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:25:37.388146    5416 start.go:703] Will try again in 5 seconds ...
	I0925 04:25:42.390421    5416 start.go:365] acquiring machines lock for custom-flannel-570000: {Name:mk918d99818e2cf21e5912bc291ed18d4b442ba6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 04:25:42.390847    5416 start.go:369] acquired machines lock for "custom-flannel-570000" in 329.333µs
	I0925 04:25:42.390979    5416 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-570000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:custom-flannel-570000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 04:25:42.391304    5416 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 04:25:42.396111    5416 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0925 04:25:42.444596    5416 start.go:159] libmachine.API.Create for "custom-flannel-570000" (driver="qemu2")
	I0925 04:25:42.444640    5416 client.go:168] LocalClient.Create starting
	I0925 04:25:42.444768    5416 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem
	I0925 04:25:42.444824    5416 main.go:141] libmachine: Decoding PEM data...
	I0925 04:25:42.444841    5416 main.go:141] libmachine: Parsing certificate...
	I0925 04:25:42.444903    5416 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/cert.pem
	I0925 04:25:42.444938    5416 main.go:141] libmachine: Decoding PEM data...
	I0925 04:25:42.444952    5416 main.go:141] libmachine: Parsing certificate...
	I0925 04:25:42.445468    5416 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17297-1010/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0925 04:25:42.578175    5416 main.go:141] libmachine: Creating SSH key...
	I0925 04:25:42.683414    5416 main.go:141] libmachine: Creating Disk image...
	I0925 04:25:42.683420    5416 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 04:25:42.683556    5416 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/custom-flannel-570000/disk.qcow2.raw /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/custom-flannel-570000/disk.qcow2
	I0925 04:25:42.693047    5416 main.go:141] libmachine: STDOUT: 
	I0925 04:25:42.693063    5416 main.go:141] libmachine: STDERR: 
	I0925 04:25:42.693123    5416 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/custom-flannel-570000/disk.qcow2 +20000M
	I0925 04:25:42.700294    5416 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 04:25:42.700305    5416 main.go:141] libmachine: STDERR: 
	I0925 04:25:42.700319    5416 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/custom-flannel-570000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/custom-flannel-570000/disk.qcow2
	I0925 04:25:42.700327    5416 main.go:141] libmachine: Starting QEMU VM...
	I0925 04:25:42.700371    5416 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/custom-flannel-570000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/custom-flannel-570000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/custom-flannel-570000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:2d:c0:ad:ba:18 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/custom-flannel-570000/disk.qcow2
	I0925 04:25:42.701888    5416 main.go:141] libmachine: STDOUT: 
	I0925 04:25:42.701902    5416 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 04:25:42.701915    5416 client.go:171] LocalClient.Create took 257.268542ms
	I0925 04:25:44.704137    5416 start.go:128] duration metric: createHost completed in 2.312774667s
	I0925 04:25:44.704233    5416 start.go:83] releasing machines lock for "custom-flannel-570000", held for 2.313360792s
	W0925 04:25:44.704693    5416 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-570000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-570000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:25:44.713554    5416 out.go:177] 
	W0925 04:25:44.717558    5416 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 04:25:44.717607    5416 out.go:239] * 
	* 
	W0925 04:25:44.720483    5416 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0925 04:25:44.729502    5416 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.77s)
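Note: every start in this group dies at the same step. socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), i.e. no socket_vmnet daemon is listening on the build host, so QEMU is never launched. A minimal Go probe for that one precondition is sketched below; it is a hypothetical diagnostic, not part of the test suite, and only reuses the socket path visible in the failing command lines.

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// Dial the same unix socket that socket_vmnet_client connects to on
		// behalf of qemu-system-aarch64; "connection refused" here reproduces
		// the failure logged above.
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet unreachable at %s: %v\n", sock, err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is listening at", sock)
	}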

TestNetworkPlugins/group/false/Start (9.69s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-570000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-570000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.687808541s)

-- stdout --
	* [false-570000] minikube v1.31.2 on Darwin 13.6 (arm64)
	  - MINIKUBE_LOCATION=17297
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node false-570000 in cluster false-570000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-570000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0925 04:25:47.009618    5540 out.go:296] Setting OutFile to fd 1 ...
	I0925 04:25:47.009737    5540 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:25:47.009740    5540 out.go:309] Setting ErrFile to fd 2...
	I0925 04:25:47.009742    5540 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:25:47.009878    5540 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17297-1010/.minikube/bin
	I0925 04:25:47.010962    5540 out.go:303] Setting JSON to false
	I0925 04:25:47.026174    5540 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3322,"bootTime":1695637825,"procs":415,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.6","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 04:25:47.026254    5540 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0925 04:25:47.030838    5540 out.go:177] * [false-570000] minikube v1.31.2 on Darwin 13.6 (arm64)
	I0925 04:25:47.038677    5540 out.go:177]   - MINIKUBE_LOCATION=17297
	I0925 04:25:47.042702    5540 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	I0925 04:25:47.038722    5540 notify.go:220] Checking for updates...
	I0925 04:25:47.048646    5540 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 04:25:47.051719    5540 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 04:25:47.054716    5540 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	I0925 04:25:47.057654    5540 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 04:25:47.060972    5540 config.go:182] Loaded profile config "multinode-352000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 04:25:47.061015    5540 driver.go:373] Setting default libvirt URI to qemu:///system
	I0925 04:25:47.065749    5540 out.go:177] * Using the qemu2 driver based on user configuration
	I0925 04:25:47.072666    5540 start.go:298] selected driver: qemu2
	I0925 04:25:47.072672    5540 start.go:902] validating driver "qemu2" against <nil>
	I0925 04:25:47.072677    5540 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 04:25:47.074810    5540 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0925 04:25:47.078706    5540 out.go:177] * Automatically selected the socket_vmnet network
	I0925 04:25:47.081816    5540 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0925 04:25:47.081850    5540 cni.go:84] Creating CNI manager for "false"
	I0925 04:25:47.081855    5540 start_flags.go:321] config:
	{Name:false-570000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:false-570000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 04:25:47.086169    5540 iso.go:125] acquiring lock: {Name:mkf881a60cf9fd1672567914305ff6f7a4f13809 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 04:25:47.093684    5540 out.go:177] * Starting control plane node false-570000 in cluster false-570000
	I0925 04:25:47.097686    5540 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0925 04:25:47.097701    5540 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0925 04:25:47.097709    5540 cache.go:57] Caching tarball of preloaded images
	I0925 04:25:47.097764    5540 preload.go:174] Found /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0925 04:25:47.097769    5540 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0925 04:25:47.097833    5540 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/false-570000/config.json ...
	I0925 04:25:47.097845    5540 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/false-570000/config.json: {Name:mkc9a1957959f768aae620c6f9ac5c4486ba2606 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 04:25:47.098056    5540 start.go:365] acquiring machines lock for false-570000: {Name:mk918d99818e2cf21e5912bc291ed18d4b442ba6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 04:25:47.098087    5540 start.go:369] acquired machines lock for "false-570000" in 25.458µs
	I0925 04:25:47.098096    5540 start.go:93] Provisioning new machine with config: &{Name:false-570000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:false-570000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 04:25:47.098124    5540 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 04:25:47.106688    5540 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0925 04:25:47.122722    5540 start.go:159] libmachine.API.Create for "false-570000" (driver="qemu2")
	I0925 04:25:47.122748    5540 client.go:168] LocalClient.Create starting
	I0925 04:25:47.122842    5540 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem
	I0925 04:25:47.122877    5540 main.go:141] libmachine: Decoding PEM data...
	I0925 04:25:47.122890    5540 main.go:141] libmachine: Parsing certificate...
	I0925 04:25:47.122927    5540 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/cert.pem
	I0925 04:25:47.122951    5540 main.go:141] libmachine: Decoding PEM data...
	I0925 04:25:47.122960    5540 main.go:141] libmachine: Parsing certificate...
	I0925 04:25:47.123336    5540 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17297-1010/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0925 04:25:47.241441    5540 main.go:141] libmachine: Creating SSH key...
	I0925 04:25:47.316871    5540 main.go:141] libmachine: Creating Disk image...
	I0925 04:25:47.316876    5540 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 04:25:47.317017    5540 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/false-570000/disk.qcow2.raw /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/false-570000/disk.qcow2
	I0925 04:25:47.325572    5540 main.go:141] libmachine: STDOUT: 
	I0925 04:25:47.325584    5540 main.go:141] libmachine: STDERR: 
	I0925 04:25:47.325639    5540 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/false-570000/disk.qcow2 +20000M
	I0925 04:25:47.332861    5540 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 04:25:47.332880    5540 main.go:141] libmachine: STDERR: 
	I0925 04:25:47.332893    5540 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/false-570000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/false-570000/disk.qcow2
	I0925 04:25:47.332900    5540 main.go:141] libmachine: Starting QEMU VM...
	I0925 04:25:47.332931    5540 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/false-570000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/false-570000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/false-570000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:62:9a:be:0d:96 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/false-570000/disk.qcow2
	I0925 04:25:47.334442    5540 main.go:141] libmachine: STDOUT: 
	I0925 04:25:47.334454    5540 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 04:25:47.334472    5540 client.go:171] LocalClient.Create took 211.717458ms
	I0925 04:25:49.336651    5540 start.go:128] duration metric: createHost completed in 2.238507125s
	I0925 04:25:49.336717    5540 start.go:83] releasing machines lock for "false-570000", held for 2.238617916s
	W0925 04:25:49.336758    5540 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:25:49.342948    5540 out.go:177] * Deleting "false-570000" in qemu2 ...
	W0925 04:25:49.366673    5540 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:25:49.366698    5540 start.go:703] Will try again in 5 seconds ...
	I0925 04:25:54.368966    5540 start.go:365] acquiring machines lock for false-570000: {Name:mk918d99818e2cf21e5912bc291ed18d4b442ba6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 04:25:54.369455    5540 start.go:369] acquired machines lock for "false-570000" in 377.375µs
	I0925 04:25:54.369607    5540 start.go:93] Provisioning new machine with config: &{Name:false-570000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:false-570000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 04:25:54.369903    5540 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 04:25:54.375661    5540 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0925 04:25:54.422562    5540 start.go:159] libmachine.API.Create for "false-570000" (driver="qemu2")
	I0925 04:25:54.422615    5540 client.go:168] LocalClient.Create starting
	I0925 04:25:54.422772    5540 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem
	I0925 04:25:54.422837    5540 main.go:141] libmachine: Decoding PEM data...
	I0925 04:25:54.422857    5540 main.go:141] libmachine: Parsing certificate...
	I0925 04:25:54.422922    5540 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/cert.pem
	I0925 04:25:54.422962    5540 main.go:141] libmachine: Decoding PEM data...
	I0925 04:25:54.422980    5540 main.go:141] libmachine: Parsing certificate...
	I0925 04:25:54.423532    5540 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17297-1010/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0925 04:25:54.549843    5540 main.go:141] libmachine: Creating SSH key...
	I0925 04:25:54.612253    5540 main.go:141] libmachine: Creating Disk image...
	I0925 04:25:54.612262    5540 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 04:25:54.612395    5540 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/false-570000/disk.qcow2.raw /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/false-570000/disk.qcow2
	I0925 04:25:54.620854    5540 main.go:141] libmachine: STDOUT: 
	I0925 04:25:54.620871    5540 main.go:141] libmachine: STDERR: 
	I0925 04:25:54.620942    5540 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/false-570000/disk.qcow2 +20000M
	I0925 04:25:54.628311    5540 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 04:25:54.628324    5540 main.go:141] libmachine: STDERR: 
	I0925 04:25:54.628338    5540 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/false-570000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/false-570000/disk.qcow2
	I0925 04:25:54.628348    5540 main.go:141] libmachine: Starting QEMU VM...
	I0925 04:25:54.628393    5540 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/false-570000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/false-570000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/false-570000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:2e:59:f7:e7:11 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/false-570000/disk.qcow2
	I0925 04:25:54.629937    5540 main.go:141] libmachine: STDOUT: 
	I0925 04:25:54.629949    5540 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 04:25:54.629963    5540 client.go:171] LocalClient.Create took 207.342416ms
	I0925 04:25:56.632133    5540 start.go:128] duration metric: createHost completed in 2.262194166s
	I0925 04:25:56.632196    5540 start.go:83] releasing machines lock for "false-570000", held for 2.26271625s
	W0925 04:25:56.632697    5540 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-570000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-570000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:25:56.642259    5540 out.go:177] 
	W0925 04:25:56.646429    5540 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 04:25:56.646452    5540 out.go:239] * 
	* 
	W0925 04:25:56.649143    5540 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0925 04:25:56.659328    5540 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.69s)
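Note: the disk-preparation step succeeds on every attempt. As the "executing:" lines show, the machine driver shells out to qemu-img twice, first converting the raw seed image to qcow2 and then growing it by the requested 20000 MB; only the subsequent socket_vmnet_client launch fails. A rough Go sketch of that qemu-img step follows, with a made-up /tmp path standing in for the CI machine directory; it assumes qemu-img is on PATH and that a disk.qcow2.raw seed file already exists.

	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	// run executes one command and prints its combined output, mirroring the
	// "executing:" / STDOUT: / STDERR: triples in the machine log above.
	func run(name string, args ...string) {
		out, err := exec.Command(name, args...).CombinedOutput()
		fmt.Printf("executing: %s %v\nOUTPUT: %s\n", name, args, out)
		if err != nil {
			log.Fatalf("%s failed: %v", name, err)
		}
	}

	func main() {
		disk := "/tmp/demo-machine/disk.qcow2" // hypothetical path, not the CI layout
		run("qemu-img", "convert", "-f", "raw", "-O", "qcow2", disk+".raw", disk)
		run("qemu-img", "resize", disk, "+20000M")
	}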

TestNetworkPlugins/group/kindnet/Start (9.79s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-570000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-570000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.785863625s)

-- stdout --
	* [kindnet-570000] minikube v1.31.2 on Darwin 13.6 (arm64)
	  - MINIKUBE_LOCATION=17297
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kindnet-570000 in cluster kindnet-570000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-570000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0925 04:25:58.780930    5652 out.go:296] Setting OutFile to fd 1 ...
	I0925 04:25:58.781048    5652 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:25:58.781051    5652 out.go:309] Setting ErrFile to fd 2...
	I0925 04:25:58.781053    5652 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:25:58.781175    5652 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17297-1010/.minikube/bin
	I0925 04:25:58.782175    5652 out.go:303] Setting JSON to false
	I0925 04:25:58.797332    5652 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3333,"bootTime":1695637825,"procs":414,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.6","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 04:25:58.797406    5652 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0925 04:25:58.801603    5652 out.go:177] * [kindnet-570000] minikube v1.31.2 on Darwin 13.6 (arm64)
	I0925 04:25:58.809596    5652 out.go:177]   - MINIKUBE_LOCATION=17297
	I0925 04:25:58.809677    5652 notify.go:220] Checking for updates...
	I0925 04:25:58.816510    5652 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	I0925 04:25:58.819559    5652 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 04:25:58.822546    5652 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 04:25:58.825563    5652 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	I0925 04:25:58.828572    5652 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 04:25:58.831854    5652 config.go:182] Loaded profile config "multinode-352000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 04:25:58.831901    5652 driver.go:373] Setting default libvirt URI to qemu:///system
	I0925 04:25:58.836521    5652 out.go:177] * Using the qemu2 driver based on user configuration
	I0925 04:25:58.842529    5652 start.go:298] selected driver: qemu2
	I0925 04:25:58.842537    5652 start.go:902] validating driver "qemu2" against <nil>
	I0925 04:25:58.842549    5652 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 04:25:58.844558    5652 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0925 04:25:58.847503    5652 out.go:177] * Automatically selected the socket_vmnet network
	I0925 04:25:58.850592    5652 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0925 04:25:58.850621    5652 cni.go:84] Creating CNI manager for "kindnet"
	I0925 04:25:58.850625    5652 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0925 04:25:58.850632    5652 start_flags.go:321] config:
	{Name:kindnet-570000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:kindnet-570000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 04:25:58.854963    5652 iso.go:125] acquiring lock: {Name:mkf881a60cf9fd1672567914305ff6f7a4f13809 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 04:25:58.862533    5652 out.go:177] * Starting control plane node kindnet-570000 in cluster kindnet-570000
	I0925 04:25:58.866575    5652 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0925 04:25:58.866594    5652 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0925 04:25:58.866611    5652 cache.go:57] Caching tarball of preloaded images
	I0925 04:25:58.866674    5652 preload.go:174] Found /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0925 04:25:58.866680    5652 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0925 04:25:58.866744    5652 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/kindnet-570000/config.json ...
	I0925 04:25:58.866766    5652 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/kindnet-570000/config.json: {Name:mk0232b90cf240e5dd2b7207e4e584bdd2e76332 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 04:25:58.866983    5652 start.go:365] acquiring machines lock for kindnet-570000: {Name:mk918d99818e2cf21e5912bc291ed18d4b442ba6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 04:25:58.867013    5652 start.go:369] acquired machines lock for "kindnet-570000" in 24.459µs
	I0925 04:25:58.867022    5652 start.go:93] Provisioning new machine with config: &{Name:kindnet-570000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:kindnet-570000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 04:25:58.867064    5652 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 04:25:58.875541    5652 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0925 04:25:58.891731    5652 start.go:159] libmachine.API.Create for "kindnet-570000" (driver="qemu2")
	I0925 04:25:58.891755    5652 client.go:168] LocalClient.Create starting
	I0925 04:25:58.891821    5652 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem
	I0925 04:25:58.891847    5652 main.go:141] libmachine: Decoding PEM data...
	I0925 04:25:58.891860    5652 main.go:141] libmachine: Parsing certificate...
	I0925 04:25:58.891900    5652 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/cert.pem
	I0925 04:25:58.891922    5652 main.go:141] libmachine: Decoding PEM data...
	I0925 04:25:58.891931    5652 main.go:141] libmachine: Parsing certificate...
	I0925 04:25:58.892287    5652 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17297-1010/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0925 04:25:59.007312    5652 main.go:141] libmachine: Creating SSH key...
	I0925 04:25:59.151305    5652 main.go:141] libmachine: Creating Disk image...
	I0925 04:25:59.151313    5652 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 04:25:59.151454    5652 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/kindnet-570000/disk.qcow2.raw /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/kindnet-570000/disk.qcow2
	I0925 04:25:59.159991    5652 main.go:141] libmachine: STDOUT: 
	I0925 04:25:59.160009    5652 main.go:141] libmachine: STDERR: 
	I0925 04:25:59.160073    5652 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/kindnet-570000/disk.qcow2 +20000M
	I0925 04:25:59.167145    5652 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 04:25:59.167158    5652 main.go:141] libmachine: STDERR: 
	I0925 04:25:59.167175    5652 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/kindnet-570000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/kindnet-570000/disk.qcow2
	I0925 04:25:59.167185    5652 main.go:141] libmachine: Starting QEMU VM...
	I0925 04:25:59.167226    5652 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/kindnet-570000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/kindnet-570000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/kindnet-570000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:29:cc:20:1d:4d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/kindnet-570000/disk.qcow2
	I0925 04:25:59.168710    5652 main.go:141] libmachine: STDOUT: 
	I0925 04:25:59.168724    5652 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 04:25:59.168745    5652 client.go:171] LocalClient.Create took 276.981667ms
	I0925 04:26:01.170958    5652 start.go:128] duration metric: createHost completed in 2.303873208s
	I0925 04:26:01.171018    5652 start.go:83] releasing machines lock for "kindnet-570000", held for 2.303990542s
	W0925 04:26:01.171113    5652 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:26:01.179389    5652 out.go:177] * Deleting "kindnet-570000" in qemu2 ...
	W0925 04:26:01.201144    5652 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:26:01.201179    5652 start.go:703] Will try again in 5 seconds ...
	I0925 04:26:06.203458    5652 start.go:365] acquiring machines lock for kindnet-570000: {Name:mk918d99818e2cf21e5912bc291ed18d4b442ba6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 04:26:06.203961    5652 start.go:369] acquired machines lock for "kindnet-570000" in 394.792µs
	I0925 04:26:06.204088    5652 start.go:93] Provisioning new machine with config: &{Name:kindnet-570000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:kindnet-570000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 04:26:06.204378    5652 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 04:26:06.210157    5652 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0925 04:26:06.256436    5652 start.go:159] libmachine.API.Create for "kindnet-570000" (driver="qemu2")
	I0925 04:26:06.256476    5652 client.go:168] LocalClient.Create starting
	I0925 04:26:06.256593    5652 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem
	I0925 04:26:06.256646    5652 main.go:141] libmachine: Decoding PEM data...
	I0925 04:26:06.256671    5652 main.go:141] libmachine: Parsing certificate...
	I0925 04:26:06.256736    5652 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/cert.pem
	I0925 04:26:06.256770    5652 main.go:141] libmachine: Decoding PEM data...
	I0925 04:26:06.256783    5652 main.go:141] libmachine: Parsing certificate...
	I0925 04:26:06.257326    5652 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17297-1010/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0925 04:26:06.384961    5652 main.go:141] libmachine: Creating SSH key...
	I0925 04:26:06.483894    5652 main.go:141] libmachine: Creating Disk image...
	I0925 04:26:06.483899    5652 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 04:26:06.484060    5652 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/kindnet-570000/disk.qcow2.raw /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/kindnet-570000/disk.qcow2
	I0925 04:26:06.492891    5652 main.go:141] libmachine: STDOUT: 
	I0925 04:26:06.492913    5652 main.go:141] libmachine: STDERR: 
	I0925 04:26:06.492969    5652 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/kindnet-570000/disk.qcow2 +20000M
	I0925 04:26:06.500281    5652 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 04:26:06.500294    5652 main.go:141] libmachine: STDERR: 
	I0925 04:26:06.500306    5652 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/kindnet-570000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/kindnet-570000/disk.qcow2
	I0925 04:26:06.500312    5652 main.go:141] libmachine: Starting QEMU VM...
	I0925 04:26:06.500347    5652 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/kindnet-570000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/kindnet-570000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/kindnet-570000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:69:0d:82:f5:07 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/kindnet-570000/disk.qcow2
	I0925 04:26:06.501887    5652 main.go:141] libmachine: STDOUT: 
	I0925 04:26:06.501900    5652 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 04:26:06.501921    5652 client.go:171] LocalClient.Create took 245.430375ms
	I0925 04:26:08.504089    5652 start.go:128] duration metric: createHost completed in 2.299682667s
	I0925 04:26:08.504185    5652 start.go:83] releasing machines lock for "kindnet-570000", held for 2.30017325s
	W0925 04:26:08.504562    5652 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-570000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-570000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:26:08.512026    5652 out.go:177] 
	W0925 04:26:08.516175    5652 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 04:26:08.516226    5652 out.go:239] * 
	* 
	W0925 04:26:08.519125    5652 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0925 04:26:08.527061    5652 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.79s)
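
The kindnet failure above, and the flannel, enable-default-cni, and bridge failures that follow, all abort the same way: socket_vmnet_client cannot reach the socket_vmnet daemon's unix socket, so QEMU never gets a network device and host creation fails. As a cross-check, here is a minimal Go sketch (a hypothetical standalone diagnostic, not part of net_test.go; the only input it assumes is the /var/run/socket_vmnet path taken verbatim from the qemu invocations above) that performs the same connection the failing step attempts:

// probe_socket_vmnet.go - hypothetical diagnostic, not part of the test suite.
// Dials the unix socket that socket_vmnet_client connects to. A
// "connection refused" error here reproduces the failure in the logs
// above and means the socket_vmnet daemon is not listening.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path from the qemu command lines in this report
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If this prints "connection refused" on the build agent, the group of failures is environmental (daemon down or socket path misconfigured) rather than a regression in the network plugins under test.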

TestNetworkPlugins/group/flannel/Start (9.72s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-570000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-570000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.713605666s)

-- stdout --
	* [flannel-570000] minikube v1.31.2 on Darwin 13.6 (arm64)
	  - MINIKUBE_LOCATION=17297
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node flannel-570000 in cluster flannel-570000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-570000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0925 04:26:10.753667    5768 out.go:296] Setting OutFile to fd 1 ...
	I0925 04:26:10.753789    5768 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:26:10.753792    5768 out.go:309] Setting ErrFile to fd 2...
	I0925 04:26:10.753796    5768 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:26:10.753936    5768 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17297-1010/.minikube/bin
	I0925 04:26:10.754968    5768 out.go:303] Setting JSON to false
	I0925 04:26:10.770174    5768 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3345,"bootTime":1695637825,"procs":412,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.6","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 04:26:10.770270    5768 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0925 04:26:10.775598    5768 out.go:177] * [flannel-570000] minikube v1.31.2 on Darwin 13.6 (arm64)
	I0925 04:26:10.782576    5768 out.go:177]   - MINIKUBE_LOCATION=17297
	I0925 04:26:10.782618    5768 notify.go:220] Checking for updates...
	I0925 04:26:10.789600    5768 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	I0925 04:26:10.792517    5768 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 04:26:10.795535    5768 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 04:26:10.798601    5768 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	I0925 04:26:10.801508    5768 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 04:26:10.804965    5768 config.go:182] Loaded profile config "multinode-352000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 04:26:10.805017    5768 driver.go:373] Setting default libvirt URI to qemu:///system
	I0925 04:26:10.809578    5768 out.go:177] * Using the qemu2 driver based on user configuration
	I0925 04:26:10.816565    5768 start.go:298] selected driver: qemu2
	I0925 04:26:10.816572    5768 start.go:902] validating driver "qemu2" against <nil>
	I0925 04:26:10.816579    5768 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 04:26:10.818552    5768 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0925 04:26:10.821613    5768 out.go:177] * Automatically selected the socket_vmnet network
	I0925 04:26:10.823089    5768 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0925 04:26:10.823108    5768 cni.go:84] Creating CNI manager for "flannel"
	I0925 04:26:10.823114    5768 start_flags.go:316] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0925 04:26:10.823120    5768 start_flags.go:321] config:
	{Name:flannel-570000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:flannel-570000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 04:26:10.827373    5768 iso.go:125] acquiring lock: {Name:mkf881a60cf9fd1672567914305ff6f7a4f13809 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 04:26:10.834570    5768 out.go:177] * Starting control plane node flannel-570000 in cluster flannel-570000
	I0925 04:26:10.838498    5768 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0925 04:26:10.838516    5768 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0925 04:26:10.838532    5768 cache.go:57] Caching tarball of preloaded images
	I0925 04:26:10.838598    5768 preload.go:174] Found /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0925 04:26:10.838604    5768 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0925 04:26:10.838671    5768 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/flannel-570000/config.json ...
	I0925 04:26:10.838689    5768 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/flannel-570000/config.json: {Name:mk34dc35ecf721f42b2984e5deb020cfbbc17f8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 04:26:10.838892    5768 start.go:365] acquiring machines lock for flannel-570000: {Name:mk918d99818e2cf21e5912bc291ed18d4b442ba6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 04:26:10.838921    5768 start.go:369] acquired machines lock for "flannel-570000" in 23.833µs
	I0925 04:26:10.838931    5768 start.go:93] Provisioning new machine with config: &{Name:flannel-570000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:flannel-570000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 04:26:10.838957    5768 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 04:26:10.842640    5768 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0925 04:26:10.857253    5768 start.go:159] libmachine.API.Create for "flannel-570000" (driver="qemu2")
	I0925 04:26:10.857272    5768 client.go:168] LocalClient.Create starting
	I0925 04:26:10.857318    5768 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem
	I0925 04:26:10.857342    5768 main.go:141] libmachine: Decoding PEM data...
	I0925 04:26:10.857351    5768 main.go:141] libmachine: Parsing certificate...
	I0925 04:26:10.857387    5768 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/cert.pem
	I0925 04:26:10.857405    5768 main.go:141] libmachine: Decoding PEM data...
	I0925 04:26:10.857411    5768 main.go:141] libmachine: Parsing certificate...
	I0925 04:26:10.857732    5768 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17297-1010/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0925 04:26:10.976215    5768 main.go:141] libmachine: Creating SSH key...
	I0925 04:26:11.107811    5768 main.go:141] libmachine: Creating Disk image...
	I0925 04:26:11.107818    5768 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 04:26:11.107964    5768 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/flannel-570000/disk.qcow2.raw /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/flannel-570000/disk.qcow2
	I0925 04:26:11.116903    5768 main.go:141] libmachine: STDOUT: 
	I0925 04:26:11.116915    5768 main.go:141] libmachine: STDERR: 
	I0925 04:26:11.116970    5768 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/flannel-570000/disk.qcow2 +20000M
	I0925 04:26:11.124220    5768 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 04:26:11.124250    5768 main.go:141] libmachine: STDERR: 
	I0925 04:26:11.124273    5768 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/flannel-570000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/flannel-570000/disk.qcow2
	I0925 04:26:11.124281    5768 main.go:141] libmachine: Starting QEMU VM...
	I0925 04:26:11.124318    5768 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/flannel-570000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/flannel-570000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/flannel-570000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:72:18:99:27:d4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/flannel-570000/disk.qcow2
	I0925 04:26:11.125958    5768 main.go:141] libmachine: STDOUT: 
	I0925 04:26:11.125971    5768 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 04:26:11.125990    5768 client.go:171] LocalClient.Create took 268.713833ms
	I0925 04:26:13.128167    5768 start.go:128] duration metric: createHost completed in 2.289185834s
	I0925 04:26:13.128247    5768 start.go:83] releasing machines lock for "flannel-570000", held for 2.289315375s
	W0925 04:26:13.128308    5768 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:26:13.139352    5768 out.go:177] * Deleting "flannel-570000" in qemu2 ...
	W0925 04:26:13.159792    5768 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:26:13.159817    5768 start.go:703] Will try again in 5 seconds ...
	I0925 04:26:18.160850    5768 start.go:365] acquiring machines lock for flannel-570000: {Name:mk918d99818e2cf21e5912bc291ed18d4b442ba6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 04:26:18.161340    5768 start.go:369] acquired machines lock for "flannel-570000" in 348.208µs
	I0925 04:26:18.161481    5768 start.go:93] Provisioning new machine with config: &{Name:flannel-570000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:flannel-570000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 04:26:18.161872    5768 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 04:26:18.170522    5768 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0925 04:26:18.215261    5768 start.go:159] libmachine.API.Create for "flannel-570000" (driver="qemu2")
	I0925 04:26:18.215304    5768 client.go:168] LocalClient.Create starting
	I0925 04:26:18.215417    5768 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem
	I0925 04:26:18.215464    5768 main.go:141] libmachine: Decoding PEM data...
	I0925 04:26:18.215490    5768 main.go:141] libmachine: Parsing certificate...
	I0925 04:26:18.215562    5768 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/cert.pem
	I0925 04:26:18.215597    5768 main.go:141] libmachine: Decoding PEM data...
	I0925 04:26:18.215610    5768 main.go:141] libmachine: Parsing certificate...
	I0925 04:26:18.216103    5768 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17297-1010/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0925 04:26:18.347744    5768 main.go:141] libmachine: Creating SSH key...
	I0925 04:26:18.382030    5768 main.go:141] libmachine: Creating Disk image...
	I0925 04:26:18.382035    5768 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 04:26:18.382167    5768 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/flannel-570000/disk.qcow2.raw /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/flannel-570000/disk.qcow2
	I0925 04:26:18.390668    5768 main.go:141] libmachine: STDOUT: 
	I0925 04:26:18.390681    5768 main.go:141] libmachine: STDERR: 
	I0925 04:26:18.390725    5768 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/flannel-570000/disk.qcow2 +20000M
	I0925 04:26:18.397939    5768 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 04:26:18.397953    5768 main.go:141] libmachine: STDERR: 
	I0925 04:26:18.397978    5768 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/flannel-570000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/flannel-570000/disk.qcow2
	I0925 04:26:18.397986    5768 main.go:141] libmachine: Starting QEMU VM...
	I0925 04:26:18.398023    5768 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/flannel-570000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/flannel-570000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/flannel-570000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:e1:bf:e6:a3:4c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/flannel-570000/disk.qcow2
	I0925 04:26:18.399548    5768 main.go:141] libmachine: STDOUT: 
	I0925 04:26:18.399561    5768 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 04:26:18.399573    5768 client.go:171] LocalClient.Create took 184.264916ms
	I0925 04:26:20.401751    5768 start.go:128] duration metric: createHost completed in 2.2398535s
	I0925 04:26:20.401816    5768 start.go:83] releasing machines lock for "flannel-570000", held for 2.240448042s
	W0925 04:26:20.402234    5768 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-570000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:26:20.410844    5768 out.go:177] 
	W0925 04:26:20.414921    5768 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 04:26:20.414969    5768 out.go:239] * 
	W0925 04:26:20.417532    5768 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0925 04:26:20.427705    5768 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.72s)
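
The flannel log above traces minikube's full two-attempt start: create host, socket_vmnet connection refused, delete the half-created profile, wait five seconds ("Will try again in 5 seconds ..."), create again, same refusal, then exit GUEST_PROVISION with status 80. A schematic of that control flow, reconstructed only from the log lines above (hypothetical helper names; not minikube's actual source):

// Control-flow schematic of the retry sequence seen in the log above.
// Hypothetical reconstruction from the log lines; not minikube's code.
package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// createHost stands in for libmachine.API.Create; in this report it
// always fails because the socket_vmnet daemon refuses the connection.
func createHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := createHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		// the log shows the profile being deleted here before the retry
		time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
		if err = createHost(); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			os.Exit(80) // matches the observed exit status 80
		}
	}
}

Because both attempts hit the same refused socket, the retry never helps; every remaining TestNetworkPlugins start in this report repeats this exact sequence.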

TestNetworkPlugins/group/enable-default-cni/Start (9.75s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-570000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-570000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.745299667s)

-- stdout --
	* [enable-default-cni-570000] minikube v1.31.2 on Darwin 13.6 (arm64)
	  - MINIKUBE_LOCATION=17297
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node enable-default-cni-570000 in cluster enable-default-cni-570000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-570000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0925 04:26:22.740914    5892 out.go:296] Setting OutFile to fd 1 ...
	I0925 04:26:22.741056    5892 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:26:22.741059    5892 out.go:309] Setting ErrFile to fd 2...
	I0925 04:26:22.741061    5892 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:26:22.741186    5892 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17297-1010/.minikube/bin
	I0925 04:26:22.742201    5892 out.go:303] Setting JSON to false
	I0925 04:26:22.757436    5892 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3357,"bootTime":1695637825,"procs":418,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.6","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 04:26:22.757514    5892 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0925 04:26:22.762466    5892 out.go:177] * [enable-default-cni-570000] minikube v1.31.2 on Darwin 13.6 (arm64)
	I0925 04:26:22.770654    5892 out.go:177]   - MINIKUBE_LOCATION=17297
	I0925 04:26:22.774563    5892 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	I0925 04:26:22.770729    5892 notify.go:220] Checking for updates...
	I0925 04:26:22.780655    5892 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 04:26:22.783636    5892 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 04:26:22.786609    5892 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	I0925 04:26:22.789659    5892 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 04:26:22.791496    5892 config.go:182] Loaded profile config "multinode-352000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 04:26:22.791547    5892 driver.go:373] Setting default libvirt URI to qemu:///system
	I0925 04:26:22.795555    5892 out.go:177] * Using the qemu2 driver based on user configuration
	I0925 04:26:22.802484    5892 start.go:298] selected driver: qemu2
	I0925 04:26:22.802491    5892 start.go:902] validating driver "qemu2" against <nil>
	I0925 04:26:22.802497    5892 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 04:26:22.804691    5892 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0925 04:26:22.807651    5892 out.go:177] * Automatically selected the socket_vmnet network
	E0925 04:26:22.810749    5892 start_flags.go:455] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0925 04:26:22.810762    5892 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0925 04:26:22.810779    5892 cni.go:84] Creating CNI manager for "bridge"
	I0925 04:26:22.810783    5892 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0925 04:26:22.810787    5892 start_flags.go:321] config:
	{Name:enable-default-cni-570000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:enable-default-cni-570000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 04:26:22.814742    5892 iso.go:125] acquiring lock: {Name:mkf881a60cf9fd1672567914305ff6f7a4f13809 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 04:26:22.821637    5892 out.go:177] * Starting control plane node enable-default-cni-570000 in cluster enable-default-cni-570000
	I0925 04:26:22.825592    5892 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0925 04:26:22.825611    5892 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0925 04:26:22.825624    5892 cache.go:57] Caching tarball of preloaded images
	I0925 04:26:22.825687    5892 preload.go:174] Found /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0925 04:26:22.825693    5892 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0925 04:26:22.825754    5892 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/enable-default-cni-570000/config.json ...
	I0925 04:26:22.825767    5892 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/enable-default-cni-570000/config.json: {Name:mkb3920f545ec8ee4050978202bfe138b2248d4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 04:26:22.825967    5892 start.go:365] acquiring machines lock for enable-default-cni-570000: {Name:mk918d99818e2cf21e5912bc291ed18d4b442ba6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 04:26:22.826000    5892 start.go:369] acquired machines lock for "enable-default-cni-570000" in 24.667µs
	I0925 04:26:22.826010    5892 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-570000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:enable-default-cni-570000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 04:26:22.826038    5892 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 04:26:22.834686    5892 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0925 04:26:22.850243    5892 start.go:159] libmachine.API.Create for "enable-default-cni-570000" (driver="qemu2")
	I0925 04:26:22.850264    5892 client.go:168] LocalClient.Create starting
	I0925 04:26:22.850324    5892 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem
	I0925 04:26:22.850349    5892 main.go:141] libmachine: Decoding PEM data...
	I0925 04:26:22.850357    5892 main.go:141] libmachine: Parsing certificate...
	I0925 04:26:22.850397    5892 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/cert.pem
	I0925 04:26:22.850420    5892 main.go:141] libmachine: Decoding PEM data...
	I0925 04:26:22.850427    5892 main.go:141] libmachine: Parsing certificate...
	I0925 04:26:22.850780    5892 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17297-1010/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0925 04:26:22.966566    5892 main.go:141] libmachine: Creating SSH key...
	I0925 04:26:23.017063    5892 main.go:141] libmachine: Creating Disk image...
	I0925 04:26:23.017071    5892 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 04:26:23.017223    5892 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/enable-default-cni-570000/disk.qcow2.raw /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/enable-default-cni-570000/disk.qcow2
	I0925 04:26:23.025621    5892 main.go:141] libmachine: STDOUT: 
	I0925 04:26:23.025638    5892 main.go:141] libmachine: STDERR: 
	I0925 04:26:23.025684    5892 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/enable-default-cni-570000/disk.qcow2 +20000M
	I0925 04:26:23.032738    5892 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 04:26:23.032755    5892 main.go:141] libmachine: STDERR: 
	I0925 04:26:23.032770    5892 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/enable-default-cni-570000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/enable-default-cni-570000/disk.qcow2
	I0925 04:26:23.032781    5892 main.go:141] libmachine: Starting QEMU VM...
	I0925 04:26:23.032818    5892 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/enable-default-cni-570000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/enable-default-cni-570000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/enable-default-cni-570000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:bd:51:30:00:1c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/enable-default-cni-570000/disk.qcow2
	I0925 04:26:23.034321    5892 main.go:141] libmachine: STDOUT: 
	I0925 04:26:23.034336    5892 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 04:26:23.034353    5892 client.go:171] LocalClient.Create took 184.085542ms
	I0925 04:26:25.036549    5892 start.go:128] duration metric: createHost completed in 2.210480917s
	I0925 04:26:25.036678    5892 start.go:83] releasing machines lock for "enable-default-cni-570000", held for 2.210618666s
	W0925 04:26:25.036726    5892 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:26:25.046087    5892 out.go:177] * Deleting "enable-default-cni-570000" in qemu2 ...
	W0925 04:26:25.070689    5892 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:26:25.070726    5892 start.go:703] Will try again in 5 seconds ...
	I0925 04:26:30.072910    5892 start.go:365] acquiring machines lock for enable-default-cni-570000: {Name:mk918d99818e2cf21e5912bc291ed18d4b442ba6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 04:26:30.073409    5892 start.go:369] acquired machines lock for "enable-default-cni-570000" in 424.917µs
	I0925 04:26:30.073560    5892 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-570000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:enable-default-cni-570000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 04:26:30.073811    5892 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 04:26:30.082485    5892 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0925 04:26:30.128280    5892 start.go:159] libmachine.API.Create for "enable-default-cni-570000" (driver="qemu2")
	I0925 04:26:30.128314    5892 client.go:168] LocalClient.Create starting
	I0925 04:26:30.128454    5892 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem
	I0925 04:26:30.128519    5892 main.go:141] libmachine: Decoding PEM data...
	I0925 04:26:30.128537    5892 main.go:141] libmachine: Parsing certificate...
	I0925 04:26:30.128618    5892 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/cert.pem
	I0925 04:26:30.128653    5892 main.go:141] libmachine: Decoding PEM data...
	I0925 04:26:30.128667    5892 main.go:141] libmachine: Parsing certificate...
	I0925 04:26:30.129225    5892 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17297-1010/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0925 04:26:30.262797    5892 main.go:141] libmachine: Creating SSH key...
	I0925 04:26:30.398310    5892 main.go:141] libmachine: Creating Disk image...
	I0925 04:26:30.398316    5892 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 04:26:30.398467    5892 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/enable-default-cni-570000/disk.qcow2.raw /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/enable-default-cni-570000/disk.qcow2
	I0925 04:26:30.407412    5892 main.go:141] libmachine: STDOUT: 
	I0925 04:26:30.407434    5892 main.go:141] libmachine: STDERR: 
	I0925 04:26:30.407494    5892 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/enable-default-cni-570000/disk.qcow2 +20000M
	I0925 04:26:30.414640    5892 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 04:26:30.414659    5892 main.go:141] libmachine: STDERR: 
	I0925 04:26:30.414678    5892 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/enable-default-cni-570000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/enable-default-cni-570000/disk.qcow2
	I0925 04:26:30.414684    5892 main.go:141] libmachine: Starting QEMU VM...
	I0925 04:26:30.414718    5892 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/enable-default-cni-570000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/enable-default-cni-570000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/enable-default-cni-570000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:2f:7d:2f:bf:28 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/enable-default-cni-570000/disk.qcow2
	I0925 04:26:30.416213    5892 main.go:141] libmachine: STDOUT: 
	I0925 04:26:30.416236    5892 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 04:26:30.416256    5892 client.go:171] LocalClient.Create took 287.936458ms
	I0925 04:26:32.418459    5892 start.go:128] duration metric: createHost completed in 2.344582042s
	I0925 04:26:32.418538    5892 start.go:83] releasing machines lock for "enable-default-cni-570000", held for 2.345103916s
	W0925 04:26:32.418978    5892 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-570000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:26:32.429630    5892 out.go:177] 
	W0925 04:26:32.433689    5892 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 04:26:32.433726    5892 out.go:239] * 
	W0925 04:26:32.436406    5892 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0925 04:26:32.446716    5892 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.75s)

TestNetworkPlugins/group/bridge/Start (9.82s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-570000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-570000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.817294625s)

-- stdout --
	* [bridge-570000] minikube v1.31.2 on Darwin 13.6 (arm64)
	  - MINIKUBE_LOCATION=17297
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node bridge-570000 in cluster bridge-570000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-570000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0925 04:26:34.582371    6005 out.go:296] Setting OutFile to fd 1 ...
	I0925 04:26:34.582503    6005 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:26:34.582506    6005 out.go:309] Setting ErrFile to fd 2...
	I0925 04:26:34.582508    6005 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:26:34.582632    6005 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17297-1010/.minikube/bin
	I0925 04:26:34.583637    6005 out.go:303] Setting JSON to false
	I0925 04:26:34.598561    6005 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3369,"bootTime":1695637825,"procs":417,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.6","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 04:26:34.598643    6005 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0925 04:26:34.603506    6005 out.go:177] * [bridge-570000] minikube v1.31.2 on Darwin 13.6 (arm64)
	I0925 04:26:34.611466    6005 out.go:177]   - MINIKUBE_LOCATION=17297
	I0925 04:26:34.611526    6005 notify.go:220] Checking for updates...
	I0925 04:26:34.615431    6005 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	I0925 04:26:34.618476    6005 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 04:26:34.621415    6005 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 04:26:34.624467    6005 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	I0925 04:26:34.627480    6005 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 04:26:34.630811    6005 config.go:182] Loaded profile config "multinode-352000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 04:26:34.630856    6005 driver.go:373] Setting default libvirt URI to qemu:///system
	I0925 04:26:34.635426    6005 out.go:177] * Using the qemu2 driver based on user configuration
	I0925 04:26:34.642381    6005 start.go:298] selected driver: qemu2
	I0925 04:26:34.642388    6005 start.go:902] validating driver "qemu2" against <nil>
	I0925 04:26:34.642399    6005 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 04:26:34.644344    6005 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0925 04:26:34.647442    6005 out.go:177] * Automatically selected the socket_vmnet network
	I0925 04:26:34.650509    6005 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0925 04:26:34.650541    6005 cni.go:84] Creating CNI manager for "bridge"
	I0925 04:26:34.650546    6005 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0925 04:26:34.650553    6005 start_flags.go:321] config:
	{Name:bridge-570000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:bridge-570000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 04:26:34.654784    6005 iso.go:125] acquiring lock: {Name:mkf881a60cf9fd1672567914305ff6f7a4f13809 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 04:26:34.661383    6005 out.go:177] * Starting control plane node bridge-570000 in cluster bridge-570000
	I0925 04:26:34.665440    6005 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0925 04:26:34.665457    6005 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0925 04:26:34.665468    6005 cache.go:57] Caching tarball of preloaded images
	I0925 04:26:34.665520    6005 preload.go:174] Found /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0925 04:26:34.665525    6005 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0925 04:26:34.665580    6005 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/bridge-570000/config.json ...
	I0925 04:26:34.665593    6005 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/bridge-570000/config.json: {Name:mk4252020dad4370e5ef495427a13f96fd32e508 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 04:26:34.665793    6005 start.go:365] acquiring machines lock for bridge-570000: {Name:mk918d99818e2cf21e5912bc291ed18d4b442ba6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 04:26:34.665822    6005 start.go:369] acquired machines lock for "bridge-570000" in 23.625µs
	I0925 04:26:34.665831    6005 start.go:93] Provisioning new machine with config: &{Name:bridge-570000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:bridge-570000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 04:26:34.665862    6005 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 04:26:34.673425    6005 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0925 04:26:34.689162    6005 start.go:159] libmachine.API.Create for "bridge-570000" (driver="qemu2")
	I0925 04:26:34.689190    6005 client.go:168] LocalClient.Create starting
	I0925 04:26:34.689251    6005 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem
	I0925 04:26:34.689274    6005 main.go:141] libmachine: Decoding PEM data...
	I0925 04:26:34.689283    6005 main.go:141] libmachine: Parsing certificate...
	I0925 04:26:34.689320    6005 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/cert.pem
	I0925 04:26:34.689338    6005 main.go:141] libmachine: Decoding PEM data...
	I0925 04:26:34.689344    6005 main.go:141] libmachine: Parsing certificate...
	I0925 04:26:34.689662    6005 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17297-1010/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0925 04:26:34.807261    6005 main.go:141] libmachine: Creating SSH key...
	I0925 04:26:34.950712    6005 main.go:141] libmachine: Creating Disk image...
	I0925 04:26:34.950720    6005 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 04:26:34.950879    6005 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/bridge-570000/disk.qcow2.raw /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/bridge-570000/disk.qcow2
	I0925 04:26:34.959403    6005 main.go:141] libmachine: STDOUT: 
	I0925 04:26:34.959420    6005 main.go:141] libmachine: STDERR: 
	I0925 04:26:34.959468    6005 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/bridge-570000/disk.qcow2 +20000M
	I0925 04:26:34.966682    6005 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 04:26:34.966699    6005 main.go:141] libmachine: STDERR: 
	I0925 04:26:34.966714    6005 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/bridge-570000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/bridge-570000/disk.qcow2
	I0925 04:26:34.966720    6005 main.go:141] libmachine: Starting QEMU VM...
	I0925 04:26:34.966767    6005 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/bridge-570000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/bridge-570000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/bridge-570000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:45:07:3b:de:3d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/bridge-570000/disk.qcow2
	I0925 04:26:34.968301    6005 main.go:141] libmachine: STDOUT: 
	I0925 04:26:34.968315    6005 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 04:26:34.968334    6005 client.go:171] LocalClient.Create took 279.139125ms
	I0925 04:26:36.970555    6005 start.go:128] duration metric: createHost completed in 2.30465375s
	I0925 04:26:36.970650    6005 start.go:83] releasing machines lock for "bridge-570000", held for 2.304816625s
	W0925 04:26:36.970700    6005 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:26:36.983036    6005 out.go:177] * Deleting "bridge-570000" in qemu2 ...
	W0925 04:26:37.003329    6005 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:26:37.003358    6005 start.go:703] Will try again in 5 seconds ...
	I0925 04:26:42.005628    6005 start.go:365] acquiring machines lock for bridge-570000: {Name:mk918d99818e2cf21e5912bc291ed18d4b442ba6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 04:26:42.006047    6005 start.go:369] acquired machines lock for "bridge-570000" in 325.75µs
	I0925 04:26:42.006179    6005 start.go:93] Provisioning new machine with config: &{Name:bridge-570000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:bridge-570000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 04:26:42.006487    6005 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 04:26:42.016220    6005 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0925 04:26:42.063124    6005 start.go:159] libmachine.API.Create for "bridge-570000" (driver="qemu2")
	I0925 04:26:42.063165    6005 client.go:168] LocalClient.Create starting
	I0925 04:26:42.063295    6005 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem
	I0925 04:26:42.063369    6005 main.go:141] libmachine: Decoding PEM data...
	I0925 04:26:42.063386    6005 main.go:141] libmachine: Parsing certificate...
	I0925 04:26:42.063456    6005 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/cert.pem
	I0925 04:26:42.063492    6005 main.go:141] libmachine: Decoding PEM data...
	I0925 04:26:42.063504    6005 main.go:141] libmachine: Parsing certificate...
	I0925 04:26:42.064013    6005 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17297-1010/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0925 04:26:42.193550    6005 main.go:141] libmachine: Creating SSH key...
	I0925 04:26:42.315170    6005 main.go:141] libmachine: Creating Disk image...
	I0925 04:26:42.315176    6005 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 04:26:42.315307    6005 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/bridge-570000/disk.qcow2.raw /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/bridge-570000/disk.qcow2
	I0925 04:26:42.323717    6005 main.go:141] libmachine: STDOUT: 
	I0925 04:26:42.323733    6005 main.go:141] libmachine: STDERR: 
	I0925 04:26:42.323787    6005 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/bridge-570000/disk.qcow2 +20000M
	I0925 04:26:42.330932    6005 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 04:26:42.330944    6005 main.go:141] libmachine: STDERR: 
	I0925 04:26:42.330958    6005 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/bridge-570000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/bridge-570000/disk.qcow2
	I0925 04:26:42.330969    6005 main.go:141] libmachine: Starting QEMU VM...
	I0925 04:26:42.331011    6005 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/bridge-570000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/bridge-570000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/bridge-570000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:5b:6f:0b:78:ed -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/bridge-570000/disk.qcow2
	I0925 04:26:42.332558    6005 main.go:141] libmachine: STDOUT: 
	I0925 04:26:42.332572    6005 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 04:26:42.332586    6005 client.go:171] LocalClient.Create took 269.416167ms
	I0925 04:26:44.334809    6005 start.go:128] duration metric: createHost completed in 2.328268917s
	I0925 04:26:44.334899    6005 start.go:83] releasing machines lock for "bridge-570000", held for 2.32882925s
	W0925 04:26:44.335386    6005 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-570000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-570000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:26:44.345087    6005 out.go:177] 
	W0925 04:26:44.349178    6005 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 04:26:44.349218    6005 out.go:239] * 
	* 
	W0925 04:26:44.351909    6005 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0925 04:26:44.360109    6005 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.82s)
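
Every failure in this network-plugins group is the same host-side fault, visible in the stderr above: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so QEMU is never launched and minikube gives up after one retry. The standalone sketch below (not part of the test suite) probes that socket the way the driver's launch path effectively does; on a host in the state logged above it reports the same connection-refused error.

	// Minimal sketch: probe the socket_vmnet unix socket the qemu2 driver needs.
	// The socket path is taken from the log above; everything else is stdlib.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// "connection refused" means the socket file may exist but no
			// socket_vmnet daemon is accepting connections on it.
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

Since socket_vmnet (github.com/lima-vm/socket_vmnet) normally runs as a root service on the host, restarting that service is the likely fix; the tests only surface the condition.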

TestNetworkPlugins/group/kubenet/Start (9.79s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-570000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-570000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.789535958s)

-- stdout --
	* [kubenet-570000] minikube v1.31.2 on Darwin 13.6 (arm64)
	  - MINIKUBE_LOCATION=17297
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kubenet-570000 in cluster kubenet-570000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-570000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0925 04:26:46.485060    6117 out.go:296] Setting OutFile to fd 1 ...
	I0925 04:26:46.485185    6117 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:26:46.485188    6117 out.go:309] Setting ErrFile to fd 2...
	I0925 04:26:46.485190    6117 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:26:46.485303    6117 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17297-1010/.minikube/bin
	I0925 04:26:46.486345    6117 out.go:303] Setting JSON to false
	I0925 04:26:46.501772    6117 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3381,"bootTime":1695637825,"procs":414,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.6","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 04:26:46.501852    6117 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0925 04:26:46.506901    6117 out.go:177] * [kubenet-570000] minikube v1.31.2 on Darwin 13.6 (arm64)
	I0925 04:26:46.513782    6117 out.go:177]   - MINIKUBE_LOCATION=17297
	I0925 04:26:46.513829    6117 notify.go:220] Checking for updates...
	I0925 04:26:46.517780    6117 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	I0925 04:26:46.520731    6117 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 04:26:46.523795    6117 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 04:26:46.526844    6117 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	I0925 04:26:46.529784    6117 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 04:26:46.533169    6117 config.go:182] Loaded profile config "multinode-352000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 04:26:46.533213    6117 driver.go:373] Setting default libvirt URI to qemu:///system
	I0925 04:26:46.537867    6117 out.go:177] * Using the qemu2 driver based on user configuration
	I0925 04:26:46.544717    6117 start.go:298] selected driver: qemu2
	I0925 04:26:46.544722    6117 start.go:902] validating driver "qemu2" against <nil>
	I0925 04:26:46.544727    6117 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 04:26:46.546697    6117 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0925 04:26:46.549881    6117 out.go:177] * Automatically selected the socket_vmnet network
	I0925 04:26:46.551439    6117 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0925 04:26:46.551464    6117 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0925 04:26:46.551471    6117 start_flags.go:321] config:
	{Name:kubenet-570000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:kubenet-570000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 04:26:46.555514    6117 iso.go:125] acquiring lock: {Name:mkf881a60cf9fd1672567914305ff6f7a4f13809 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 04:26:46.562781    6117 out.go:177] * Starting control plane node kubenet-570000 in cluster kubenet-570000
	I0925 04:26:46.566683    6117 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0925 04:26:46.566701    6117 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0925 04:26:46.566716    6117 cache.go:57] Caching tarball of preloaded images
	I0925 04:26:46.566770    6117 preload.go:174] Found /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0925 04:26:46.566775    6117 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0925 04:26:46.566854    6117 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/kubenet-570000/config.json ...
	I0925 04:26:46.566865    6117 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/kubenet-570000/config.json: {Name:mk56d68545d2d193b4a16665688cc6ae64b7c71a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 04:26:46.567059    6117 start.go:365] acquiring machines lock for kubenet-570000: {Name:mk918d99818e2cf21e5912bc291ed18d4b442ba6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 04:26:46.567087    6117 start.go:369] acquired machines lock for "kubenet-570000" in 22.209µs
	I0925 04:26:46.567095    6117 start.go:93] Provisioning new machine with config: &{Name:kubenet-570000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:kubenet-570000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 04:26:46.567126    6117 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 04:26:46.575806    6117 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0925 04:26:46.591001    6117 start.go:159] libmachine.API.Create for "kubenet-570000" (driver="qemu2")
	I0925 04:26:46.591023    6117 client.go:168] LocalClient.Create starting
	I0925 04:26:46.591072    6117 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem
	I0925 04:26:46.591096    6117 main.go:141] libmachine: Decoding PEM data...
	I0925 04:26:46.591105    6117 main.go:141] libmachine: Parsing certificate...
	I0925 04:26:46.591144    6117 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/cert.pem
	I0925 04:26:46.591163    6117 main.go:141] libmachine: Decoding PEM data...
	I0925 04:26:46.591169    6117 main.go:141] libmachine: Parsing certificate...
	I0925 04:26:46.591484    6117 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17297-1010/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0925 04:26:46.705953    6117 main.go:141] libmachine: Creating SSH key...
	I0925 04:26:46.777331    6117 main.go:141] libmachine: Creating Disk image...
	I0925 04:26:46.777337    6117 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 04:26:46.777478    6117 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/kubenet-570000/disk.qcow2.raw /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/kubenet-570000/disk.qcow2
	I0925 04:26:46.786154    6117 main.go:141] libmachine: STDOUT: 
	I0925 04:26:46.786169    6117 main.go:141] libmachine: STDERR: 
	I0925 04:26:46.786232    6117 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/kubenet-570000/disk.qcow2 +20000M
	I0925 04:26:46.793430    6117 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 04:26:46.793446    6117 main.go:141] libmachine: STDERR: 
	I0925 04:26:46.793466    6117 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/kubenet-570000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/kubenet-570000/disk.qcow2
	I0925 04:26:46.793474    6117 main.go:141] libmachine: Starting QEMU VM...
	I0925 04:26:46.793518    6117 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/kubenet-570000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/kubenet-570000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/kubenet-570000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:f4:68:c9:00:8a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/kubenet-570000/disk.qcow2
	I0925 04:26:46.795051    6117 main.go:141] libmachine: STDOUT: 
	I0925 04:26:46.795063    6117 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 04:26:46.795082    6117 client.go:171] LocalClient.Create took 204.052583ms
	I0925 04:26:48.797298    6117 start.go:128] duration metric: createHost completed in 2.230137667s
	I0925 04:26:48.797373    6117 start.go:83] releasing machines lock for "kubenet-570000", held for 2.230276042s
	W0925 04:26:48.797455    6117 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:26:48.803994    6117 out.go:177] * Deleting "kubenet-570000" in qemu2 ...
	W0925 04:26:48.824538    6117 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:26:48.824562    6117 start.go:703] Will try again in 5 seconds ...
	I0925 04:26:53.826820    6117 start.go:365] acquiring machines lock for kubenet-570000: {Name:mk918d99818e2cf21e5912bc291ed18d4b442ba6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 04:26:53.827225    6117 start.go:369] acquired machines lock for "kubenet-570000" in 315.541µs
	I0925 04:26:53.827354    6117 start.go:93] Provisioning new machine with config: &{Name:kubenet-570000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:kubenet-570000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 04:26:53.827660    6117 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 04:26:53.837099    6117 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0925 04:26:53.884661    6117 start.go:159] libmachine.API.Create for "kubenet-570000" (driver="qemu2")
	I0925 04:26:53.884695    6117 client.go:168] LocalClient.Create starting
	I0925 04:26:53.884812    6117 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem
	I0925 04:26:53.884870    6117 main.go:141] libmachine: Decoding PEM data...
	I0925 04:26:53.884889    6117 main.go:141] libmachine: Parsing certificate...
	I0925 04:26:53.884953    6117 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/cert.pem
	I0925 04:26:53.884989    6117 main.go:141] libmachine: Decoding PEM data...
	I0925 04:26:53.885007    6117 main.go:141] libmachine: Parsing certificate...
	I0925 04:26:53.885449    6117 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17297-1010/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0925 04:26:54.013841    6117 main.go:141] libmachine: Creating SSH key...
	I0925 04:26:54.189998    6117 main.go:141] libmachine: Creating Disk image...
	I0925 04:26:54.190005    6117 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 04:26:54.190166    6117 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/kubenet-570000/disk.qcow2.raw /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/kubenet-570000/disk.qcow2
	I0925 04:26:54.199306    6117 main.go:141] libmachine: STDOUT: 
	I0925 04:26:54.199323    6117 main.go:141] libmachine: STDERR: 
	I0925 04:26:54.199399    6117 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/kubenet-570000/disk.qcow2 +20000M
	I0925 04:26:54.206647    6117 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 04:26:54.206661    6117 main.go:141] libmachine: STDERR: 
	I0925 04:26:54.206676    6117 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/kubenet-570000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/kubenet-570000/disk.qcow2
	I0925 04:26:54.206685    6117 main.go:141] libmachine: Starting QEMU VM...
	I0925 04:26:54.206729    6117 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/kubenet-570000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/kubenet-570000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/kubenet-570000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:4a:c0:46:41:b0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/kubenet-570000/disk.qcow2
	I0925 04:26:54.208291    6117 main.go:141] libmachine: STDOUT: 
	I0925 04:26:54.208304    6117 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 04:26:54.208316    6117 client.go:171] LocalClient.Create took 323.615667ms
	I0925 04:26:56.210484    6117 start.go:128] duration metric: createHost completed in 2.382798792s
	I0925 04:26:56.210568    6117 start.go:83] releasing machines lock for "kubenet-570000", held for 2.383293708s
	W0925 04:26:56.210920    6117 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-570000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-570000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:26:56.218584    6117 out.go:177] 
	W0925 04:26:56.223612    6117 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 04:26:56.223644    6117 out.go:239] * 
	* 
	W0925 04:26:56.226492    6117 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0925 04:26:56.234583    6117 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.79s)
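
Both the bridge and kubenet logs show the same recovery shape before giving up: createHost fails, the profile is deleted, one retry runs after a fixed 5-second pause, and the second failure surfaces as GUEST_PROVISION (exit status 80). A simplified sketch of that flow, using hypothetical stand-ins createHost and deleteHost rather than the real driver calls:

	package main

	import (
		"fmt"
		"time"
	)

	// startWithRetry mirrors the delete-and-retry flow in the logs above.
	func startWithRetry(createHost func() error, deleteHost func()) error {
		if err := createHost(); err == nil {
			return nil
		}
		deleteHost()                // "* Deleting ... in qemu2 ..."
		time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
		if err := createHost(); err != nil {
			return fmt.Errorf("GUEST_PROVISION: %w", err) // reported as exit status 80
		}
		return nil
	}

	func main() {
		fail := func() error {
			return fmt.Errorf(`connect "/var/run/socket_vmnet": connection refused`)
		}
		fmt.Println(startWithRetry(fail, func() {}))
	}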

TestStoppedBinaryUpgrade/Upgrade (2.25s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.887949680.exe start -p stopped-upgrade-690000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:196: (dbg) Non-zero exit: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.887949680.exe start -p stopped-upgrade-690000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.887949680.exe: permission denied (5.784541ms)
version_upgrade_test.go:196: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.887949680.exe start -p stopped-upgrade-690000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:196: (dbg) Non-zero exit: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.887949680.exe start -p stopped-upgrade-690000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.887949680.exe: permission denied (5.188167ms)
E0925 04:26:49.911393    1469 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/client.crt: no such file or directory
version_upgrade_test.go:196: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.887949680.exe start -p stopped-upgrade-690000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:196: (dbg) Non-zero exit: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.887949680.exe start -p stopped-upgrade-690000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.887949680.exe: permission denied (5.294167ms)
version_upgrade_test.go:202: legacy v1.6.2 start failed: fork/exec /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.887949680.exe: permission denied
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (2.25s)
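
This failure is unrelated to socket_vmnet: the legacy v1.6.2 binary that the test downloads into the temp directory cannot be executed at all ("fork/exec ...: permission denied"), which typically means the file was written without its execute bit (or the temp location disallows execution). A minimal sketch of the repair a harness would apply, using a hypothetical path in place of the elided temp file from the log:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Hypothetical stand-in for the real temp file under /var/folders/...
		bin := "/tmp/minikube-v1.6.2.exe"

		// fork/exec fails with EACCES when the execute bit is missing,
		// so restore it before invoking the binary.
		if err := os.Chmod(bin, 0o755); err != nil {
			fmt.Println("chmod:", err)
			return
		}
		out, err := exec.Command(bin, "version").CombinedOutput()
		fmt.Println(string(out), err)
	}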

TestStoppedBinaryUpgrade/MinikubeLogs (0.12s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-690000
version_upgrade_test.go:219: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p stopped-upgrade-690000: exit status 85 (112.76825ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-570000                         | enable-default-cni-570000 | jenkins | v1.31.2 | 25 Sep 23 04:26 PDT |                     |
	|         | sudo systemctl status kubelet                        |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-570000                         | enable-default-cni-570000 | jenkins | v1.31.2 | 25 Sep 23 04:26 PDT |                     |
	|         | sudo systemctl cat kubelet                           |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-570000                         | enable-default-cni-570000 | jenkins | v1.31.2 | 25 Sep 23 04:26 PDT |                     |
	|         | sudo journalctl -xeu kubelet                         |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-570000                         | enable-default-cni-570000 | jenkins | v1.31.2 | 25 Sep 23 04:26 PDT |                     |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-570000                         | enable-default-cni-570000 | jenkins | v1.31.2 | 25 Sep 23 04:26 PDT |                     |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-570000                         | enable-default-cni-570000 | jenkins | v1.31.2 | 25 Sep 23 04:26 PDT |                     |
	|         | sudo systemctl status docker                         |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-570000                         | enable-default-cni-570000 | jenkins | v1.31.2 | 25 Sep 23 04:26 PDT |                     |
	|         | sudo systemctl cat docker                            |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-570000                         | enable-default-cni-570000 | jenkins | v1.31.2 | 25 Sep 23 04:26 PDT |                     |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-570000                         | enable-default-cni-570000 | jenkins | v1.31.2 | 25 Sep 23 04:26 PDT |                     |
	|         | sudo docker system info                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-570000                         | enable-default-cni-570000 | jenkins | v1.31.2 | 25 Sep 23 04:26 PDT |                     |
	|         | sudo systemctl status                                |                           |         |         |                     |                     |
	|         | cri-docker --all --full                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-570000                         | enable-default-cni-570000 | jenkins | v1.31.2 | 25 Sep 23 04:26 PDT |                     |
	|         | sudo systemctl cat cri-docker                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-570000 sudo cat                | enable-default-cni-570000 | jenkins | v1.31.2 | 25 Sep 23 04:26 PDT |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-570000 sudo cat                | enable-default-cni-570000 | jenkins | v1.31.2 | 25 Sep 23 04:26 PDT |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-570000                         | enable-default-cni-570000 | jenkins | v1.31.2 | 25 Sep 23 04:26 PDT |                     |
	|         | sudo cri-dockerd --version                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-570000                         | enable-default-cni-570000 | jenkins | v1.31.2 | 25 Sep 23 04:26 PDT |                     |
	|         | sudo systemctl status                                |                           |         |         |                     |                     |
	|         | containerd --all --full                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-570000                         | enable-default-cni-570000 | jenkins | v1.31.2 | 25 Sep 23 04:26 PDT |                     |
	|         | sudo systemctl cat containerd                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-570000 sudo cat                | enable-default-cni-570000 | jenkins | v1.31.2 | 25 Sep 23 04:26 PDT |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-570000                         | enable-default-cni-570000 | jenkins | v1.31.2 | 25 Sep 23 04:26 PDT |                     |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-570000                         | enable-default-cni-570000 | jenkins | v1.31.2 | 25 Sep 23 04:26 PDT |                     |
	|         | sudo containerd config dump                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-570000                         | enable-default-cni-570000 | jenkins | v1.31.2 | 25 Sep 23 04:26 PDT |                     |
	|         | sudo systemctl status crio                           |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-570000                         | enable-default-cni-570000 | jenkins | v1.31.2 | 25 Sep 23 04:26 PDT |                     |
	|         | sudo systemctl cat crio                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-570000                         | enable-default-cni-570000 | jenkins | v1.31.2 | 25 Sep 23 04:26 PDT |                     |
	|         | sudo find /etc/crio -type f                          |                           |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                        |                           |         |         |                     |                     |
	|         | \;                                                   |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-570000                         | enable-default-cni-570000 | jenkins | v1.31.2 | 25 Sep 23 04:26 PDT |                     |
	|         | sudo crio config                                     |                           |         |         |                     |                     |
	| delete  | -p enable-default-cni-570000                         | enable-default-cni-570000 | jenkins | v1.31.2 | 25 Sep 23 04:26 PDT | 25 Sep 23 04:26 PDT |
	| start   | -p bridge-570000 --memory=3072                       | bridge-570000             | jenkins | v1.31.2 | 25 Sep 23 04:26 PDT |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                           |         |         |                     |                     |
	|         | --cni=bridge --driver=qemu2                          |                           |         |         |                     |                     |
	| ssh     | -p bridge-570000 sudo cat                            | bridge-570000             | jenkins | v1.31.2 | 25 Sep 23 04:26 PDT |                     |
	|         | /etc/nsswitch.conf                                   |                           |         |         |                     |                     |
	| ssh     | -p bridge-570000 sudo cat                            | bridge-570000             | jenkins | v1.31.2 | 25 Sep 23 04:26 PDT |                     |
	|         | /etc/hosts                                           |                           |         |         |                     |                     |
	| ssh     | -p bridge-570000 sudo cat                            | bridge-570000             | jenkins | v1.31.2 | 25 Sep 23 04:26 PDT |                     |
	|         | /etc/resolv.conf                                     |                           |         |         |                     |                     |
	| ssh     | -p bridge-570000 sudo crictl                         | bridge-570000             | jenkins | v1.31.2 | 25 Sep 23 04:26 PDT |                     |
	|         | pods                                                 |                           |         |         |                     |                     |
	| ssh     | -p bridge-570000 sudo crictl                         | bridge-570000             | jenkins | v1.31.2 | 25 Sep 23 04:26 PDT |                     |
	|         | ps --all                                             |                           |         |         |                     |                     |
	| ssh     | -p bridge-570000 sudo find                           | bridge-570000             | jenkins | v1.31.2 | 25 Sep 23 04:26 PDT |                     |
	|         | /etc/cni -type f -exec sh -c                         |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p bridge-570000 sudo ip a s                         | bridge-570000             | jenkins | v1.31.2 | 25 Sep 23 04:26 PDT |                     |
	| ssh     | -p bridge-570000 sudo ip r s                         | bridge-570000             | jenkins | v1.31.2 | 25 Sep 23 04:26 PDT |                     |
	| ssh     | -p bridge-570000 sudo                                | bridge-570000             | jenkins | v1.31.2 | 25 Sep 23 04:26 PDT |                     |
	|         | iptables-save                                        |                           |         |         |                     |                     |
	| ssh     | -p bridge-570000 sudo iptables                       | bridge-570000             | jenkins | v1.31.2 | 25 Sep 23 04:26 PDT |                     |
	|         | -t nat -L -n -v                                      |                           |         |         |                     |                     |
	| ssh     | -p bridge-570000 sudo                                | bridge-570000             | jenkins | v1.31.2 | 25 Sep 23 04:26 PDT |                     |
	|         | systemctl status kubelet --all                       |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p bridge-570000 sudo                                | bridge-570000             | jenkins | v1.31.2 | 25 Sep 23 04:26 PDT |                     |
	|         | systemctl cat kubelet                                |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p bridge-570000 sudo                                | bridge-570000             | jenkins | v1.31.2 | 25 Sep 23 04:26 PDT |                     |
	|         | journalctl -xeu kubelet --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p bridge-570000 sudo cat                            | bridge-570000             | jenkins | v1.31.2 | 25 Sep 23 04:26 PDT |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p bridge-570000 sudo cat                            | bridge-570000             | jenkins | v1.31.2 | 25 Sep 23 04:26 PDT |                     |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p bridge-570000 sudo                                | bridge-570000             | jenkins | v1.31.2 | 25 Sep 23 04:26 PDT |                     |
	|         | systemctl status docker --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p bridge-570000 sudo                                | bridge-570000             | jenkins | v1.31.2 | 25 Sep 23 04:26 PDT |                     |
	|         | systemctl cat docker                                 |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p bridge-570000 sudo cat                            | bridge-570000             | jenkins | v1.31.2 | 25 Sep 23 04:26 PDT |                     |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p bridge-570000 sudo docker                         | bridge-570000             | jenkins | v1.31.2 | 25 Sep 23 04:26 PDT |                     |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p bridge-570000 sudo                                | bridge-570000             | jenkins | v1.31.2 | 25 Sep 23 04:26 PDT |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p bridge-570000 sudo                                | bridge-570000             | jenkins | v1.31.2 | 25 Sep 23 04:26 PDT |                     |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p bridge-570000 sudo cat                            | bridge-570000             | jenkins | v1.31.2 | 25 Sep 23 04:26 PDT |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p bridge-570000 sudo cat                            | bridge-570000             | jenkins | v1.31.2 | 25 Sep 23 04:26 PDT |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p bridge-570000 sudo                                | bridge-570000             | jenkins | v1.31.2 | 25 Sep 23 04:26 PDT |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p bridge-570000 sudo                                | bridge-570000             | jenkins | v1.31.2 | 25 Sep 23 04:26 PDT |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p bridge-570000 sudo                                | bridge-570000             | jenkins | v1.31.2 | 25 Sep 23 04:26 PDT |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p bridge-570000 sudo cat                            | bridge-570000             | jenkins | v1.31.2 | 25 Sep 23 04:26 PDT |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p bridge-570000 sudo cat                            | bridge-570000             | jenkins | v1.31.2 | 25 Sep 23 04:26 PDT |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p bridge-570000 sudo                                | bridge-570000             | jenkins | v1.31.2 | 25 Sep 23 04:26 PDT |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p bridge-570000 sudo                                | bridge-570000             | jenkins | v1.31.2 | 25 Sep 23 04:26 PDT |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p bridge-570000 sudo                                | bridge-570000             | jenkins | v1.31.2 | 25 Sep 23 04:26 PDT |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p bridge-570000 sudo find                           | bridge-570000             | jenkins | v1.31.2 | 25 Sep 23 04:26 PDT |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p bridge-570000 sudo crio                           | bridge-570000             | jenkins | v1.31.2 | 25 Sep 23 04:26 PDT |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p bridge-570000                                     | bridge-570000             | jenkins | v1.31.2 | 25 Sep 23 04:26 PDT | 25 Sep 23 04:26 PDT |
	| start   | -p kubenet-570000                                    | kubenet-570000            | jenkins | v1.31.2 | 25 Sep 23 04:26 PDT |                     |
	|         | --memory=3072                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                           |         |         |                     |                     |
	|         | --network-plugin=kubenet                             |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/25 04:26:46
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.21.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0925 04:26:46.485060    6117 out.go:296] Setting OutFile to fd 1 ...
	I0925 04:26:46.485185    6117 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:26:46.485188    6117 out.go:309] Setting ErrFile to fd 2...
	I0925 04:26:46.485190    6117 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:26:46.485303    6117 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17297-1010/.minikube/bin
	I0925 04:26:46.486345    6117 out.go:303] Setting JSON to false
	I0925 04:26:46.501772    6117 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3381,"bootTime":1695637825,"procs":414,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.6","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 04:26:46.501852    6117 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0925 04:26:46.506901    6117 out.go:177] * [kubenet-570000] minikube v1.31.2 on Darwin 13.6 (arm64)
	I0925 04:26:46.513782    6117 out.go:177]   - MINIKUBE_LOCATION=17297
	I0925 04:26:46.513829    6117 notify.go:220] Checking for updates...
	I0925 04:26:46.517780    6117 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	I0925 04:26:46.520731    6117 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 04:26:46.523795    6117 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 04:26:46.526844    6117 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	I0925 04:26:46.529784    6117 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 04:26:46.533169    6117 config.go:182] Loaded profile config "multinode-352000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 04:26:46.533213    6117 driver.go:373] Setting default libvirt URI to qemu:///system
	I0925 04:26:46.537867    6117 out.go:177] * Using the qemu2 driver based on user configuration
	I0925 04:26:46.544717    6117 start.go:298] selected driver: qemu2
	I0925 04:26:46.544722    6117 start.go:902] validating driver "qemu2" against <nil>
	I0925 04:26:46.544727    6117 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 04:26:46.546697    6117 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0925 04:26:46.549881    6117 out.go:177] * Automatically selected the socket_vmnet network
	I0925 04:26:46.551439    6117 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0925 04:26:46.551464    6117 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0925 04:26:46.551471    6117 start_flags.go:321] config:
	{Name:kubenet-570000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:kubenet-570000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 04:26:46.555514    6117 iso.go:125] acquiring lock: {Name:mkf881a60cf9fd1672567914305ff6f7a4f13809 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 04:26:46.562781    6117 out.go:177] * Starting control plane node kubenet-570000 in cluster kubenet-570000
	I0925 04:26:46.566683    6117 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0925 04:26:46.566701    6117 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0925 04:26:46.566716    6117 cache.go:57] Caching tarball of preloaded images
	I0925 04:26:46.566770    6117 preload.go:174] Found /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0925 04:26:46.566775    6117 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0925 04:26:46.566854    6117 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/kubenet-570000/config.json ...
	I0925 04:26:46.566865    6117 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/kubenet-570000/config.json: {Name:mk56d68545d2d193b4a16665688cc6ae64b7c71a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 04:26:46.567059    6117 start.go:365] acquiring machines lock for kubenet-570000: {Name:mk918d99818e2cf21e5912bc291ed18d4b442ba6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 04:26:46.567087    6117 start.go:369] acquired machines lock for "kubenet-570000" in 22.209µs
	I0925 04:26:46.567095    6117 start.go:93] Provisioning new machine with config: &{Name:kubenet-570000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:kubenet-570000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 04:26:46.567126    6117 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 04:26:46.575806    6117 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0925 04:26:46.591001    6117 start.go:159] libmachine.API.Create for "kubenet-570000" (driver="qemu2")
	I0925 04:26:46.591023    6117 client.go:168] LocalClient.Create starting
	I0925 04:26:46.591072    6117 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem
	I0925 04:26:46.591096    6117 main.go:141] libmachine: Decoding PEM data...
	I0925 04:26:46.591105    6117 main.go:141] libmachine: Parsing certificate...
	I0925 04:26:46.591144    6117 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/cert.pem
	I0925 04:26:46.591163    6117 main.go:141] libmachine: Decoding PEM data...
	I0925 04:26:46.591169    6117 main.go:141] libmachine: Parsing certificate...
	I0925 04:26:46.591484    6117 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17297-1010/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0925 04:26:46.705953    6117 main.go:141] libmachine: Creating SSH key...
	I0925 04:26:46.777331    6117 main.go:141] libmachine: Creating Disk image...
	I0925 04:26:46.777337    6117 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 04:26:46.777478    6117 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/kubenet-570000/disk.qcow2.raw /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/kubenet-570000/disk.qcow2
	I0925 04:26:46.786154    6117 main.go:141] libmachine: STDOUT: 
	I0925 04:26:46.786169    6117 main.go:141] libmachine: STDERR: 
	I0925 04:26:46.786232    6117 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/kubenet-570000/disk.qcow2 +20000M
	I0925 04:26:46.793430    6117 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 04:26:46.793446    6117 main.go:141] libmachine: STDERR: 
	I0925 04:26:46.793466    6117 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/kubenet-570000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/kubenet-570000/disk.qcow2
	I0925 04:26:46.793474    6117 main.go:141] libmachine: Starting QEMU VM...
	I0925 04:26:46.793518    6117 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/kubenet-570000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/kubenet-570000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/kubenet-570000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:f4:68:c9:00:8a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/kubenet-570000/disk.qcow2
	I0925 04:26:46.795051    6117 main.go:141] libmachine: STDOUT: 
	I0925 04:26:46.795063    6117 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 04:26:46.795082    6117 client.go:171] LocalClient.Create took 204.052583ms
	I0925 04:26:48.797298    6117 start.go:128] duration metric: createHost completed in 2.230137667s
	I0925 04:26:48.797373    6117 start.go:83] releasing machines lock for "kubenet-570000", held for 2.230276042s
	W0925 04:26:48.797455    6117 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:26:48.803994    6117 out.go:177] * Deleting "kubenet-570000" in qemu2 ...
	
	* 
	* Profile "stopped-upgrade-690000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p stopped-upgrade-690000"

-- /stdout --
version_upgrade_test.go:221: `minikube logs` after upgrade to HEAD from v1.6.2 failed: exit status 85
--- FAIL: TestStoppedBinaryUpgrade/MinikubeLogs (0.12s)
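
[Editor's note] The stdout above shows why `minikube logs` exits non-zero here: the profile "stopped-upgrade-690000" no longer exists when the logs check runs, which appears to follow from the earlier TestStoppedBinaryUpgrade failures on this host rather than from a logs regression. A minimal reproduction sketch, assuming the same out/minikube-darwin-arm64 binary the suite built (both subcommands are standard minikube CLI; the exit value shown in this run was 85):

	out/minikube-darwin-arm64 profile list                      # the stopped-upgrade profile should be absent
	out/minikube-darwin-arm64 logs -p stopped-upgrade-690000    # fails: profile not found
	echo "exit status: $?"                                      # non-zero, as asserted by the test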

TestStartStop/group/old-k8s-version/serial/FirstStart (10.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-925000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-925000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0: exit status 80 (9.984889667s)

-- stdout --
	* [old-k8s-version-925000] minikube v1.31.2 on Darwin 13.6 (arm64)
	  - MINIKUBE_LOCATION=17297
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node old-k8s-version-925000 in cluster old-k8s-version-925000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-925000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0925 04:26:50.632100    6148 out.go:296] Setting OutFile to fd 1 ...
	I0925 04:26:50.632247    6148 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:26:50.632250    6148 out.go:309] Setting ErrFile to fd 2...
	I0925 04:26:50.632253    6148 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:26:50.632382    6148 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17297-1010/.minikube/bin
	I0925 04:26:50.633431    6148 out.go:303] Setting JSON to false
	I0925 04:26:50.648687    6148 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3385,"bootTime":1695637825,"procs":417,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.6","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 04:26:50.648774    6148 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0925 04:26:50.653053    6148 out.go:177] * [old-k8s-version-925000] minikube v1.31.2 on Darwin 13.6 (arm64)
	I0925 04:26:50.660031    6148 out.go:177]   - MINIKUBE_LOCATION=17297
	I0925 04:26:50.660109    6148 notify.go:220] Checking for updates...
	I0925 04:26:50.663961    6148 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	I0925 04:26:50.667003    6148 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 04:26:50.670019    6148 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 04:26:50.672985    6148 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	I0925 04:26:50.676001    6148 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 04:26:50.679281    6148 config.go:182] Loaded profile config "kubenet-570000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 04:26:50.679340    6148 config.go:182] Loaded profile config "multinode-352000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 04:26:50.679394    6148 driver.go:373] Setting default libvirt URI to qemu:///system
	I0925 04:26:50.684047    6148 out.go:177] * Using the qemu2 driver based on user configuration
	I0925 04:26:50.690837    6148 start.go:298] selected driver: qemu2
	I0925 04:26:50.690844    6148 start.go:902] validating driver "qemu2" against <nil>
	I0925 04:26:50.690850    6148 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 04:26:50.692881    6148 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0925 04:26:50.695948    6148 out.go:177] * Automatically selected the socket_vmnet network
	I0925 04:26:50.699100    6148 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0925 04:26:50.699128    6148 cni.go:84] Creating CNI manager for ""
	I0925 04:26:50.699142    6148 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0925 04:26:50.699149    6148 start_flags.go:321] config:
	{Name:old-k8s-version-925000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-925000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 04:26:50.703307    6148 iso.go:125] acquiring lock: {Name:mkf881a60cf9fd1672567914305ff6f7a4f13809 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 04:26:50.709911    6148 out.go:177] * Starting control plane node old-k8s-version-925000 in cluster old-k8s-version-925000
	I0925 04:26:50.713769    6148 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0925 04:26:50.713789    6148 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0925 04:26:50.713800    6148 cache.go:57] Caching tarball of preloaded images
	I0925 04:26:50.713861    6148 preload.go:174] Found /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0925 04:26:50.713867    6148 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0925 04:26:50.713927    6148 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/old-k8s-version-925000/config.json ...
	I0925 04:26:50.713939    6148 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/old-k8s-version-925000/config.json: {Name:mk0cab69e15e9a352bb1ee78af678dcce7d69763 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 04:26:50.714137    6148 start.go:365] acquiring machines lock for old-k8s-version-925000: {Name:mk918d99818e2cf21e5912bc291ed18d4b442ba6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 04:26:50.714166    6148 start.go:369] acquired machines lock for "old-k8s-version-925000" in 23.334µs
	I0925 04:26:50.714175    6148 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-925000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-925000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 04:26:50.714218    6148 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 04:26:50.722820    6148 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0925 04:26:50.738949    6148 start.go:159] libmachine.API.Create for "old-k8s-version-925000" (driver="qemu2")
	I0925 04:26:50.738975    6148 client.go:168] LocalClient.Create starting
	I0925 04:26:50.739038    6148 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem
	I0925 04:26:50.739064    6148 main.go:141] libmachine: Decoding PEM data...
	I0925 04:26:50.739075    6148 main.go:141] libmachine: Parsing certificate...
	I0925 04:26:50.739117    6148 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/cert.pem
	I0925 04:26:50.739138    6148 main.go:141] libmachine: Decoding PEM data...
	I0925 04:26:50.739146    6148 main.go:141] libmachine: Parsing certificate...
	I0925 04:26:50.739492    6148 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17297-1010/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0925 04:26:50.852918    6148 main.go:141] libmachine: Creating SSH key...
	I0925 04:26:51.119485    6148 main.go:141] libmachine: Creating Disk image...
	I0925 04:26:51.119494    6148 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 04:26:51.119675    6148 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/old-k8s-version-925000/disk.qcow2.raw /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/old-k8s-version-925000/disk.qcow2
	I0925 04:26:51.128879    6148 main.go:141] libmachine: STDOUT: 
	I0925 04:26:51.128895    6148 main.go:141] libmachine: STDERR: 
	I0925 04:26:51.128941    6148 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/old-k8s-version-925000/disk.qcow2 +20000M
	I0925 04:26:51.136263    6148 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 04:26:51.136281    6148 main.go:141] libmachine: STDERR: 
	I0925 04:26:51.136301    6148 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/old-k8s-version-925000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/old-k8s-version-925000/disk.qcow2
	I0925 04:26:51.136306    6148 main.go:141] libmachine: Starting QEMU VM...
	I0925 04:26:51.136348    6148 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/old-k8s-version-925000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/old-k8s-version-925000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/old-k8s-version-925000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:99:d2:d5:b0:1d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/old-k8s-version-925000/disk.qcow2
	I0925 04:26:51.137914    6148 main.go:141] libmachine: STDOUT: 
	I0925 04:26:51.137927    6148 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 04:26:51.137952    6148 client.go:171] LocalClient.Create took 398.968042ms
	I0925 04:26:53.140129    6148 start.go:128] duration metric: createHost completed in 2.425890375s
	I0925 04:26:53.140192    6148 start.go:83] releasing machines lock for "old-k8s-version-925000", held for 2.426014833s
	W0925 04:26:53.140242    6148 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:26:53.147938    6148 out.go:177] * Deleting "old-k8s-version-925000" in qemu2 ...
	W0925 04:26:53.171258    6148 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:26:53.171288    6148 start.go:703] Will try again in 5 seconds ...
	I0925 04:26:58.173437    6148 start.go:365] acquiring machines lock for old-k8s-version-925000: {Name:mk918d99818e2cf21e5912bc291ed18d4b442ba6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 04:26:58.173542    6148 start.go:369] acquired machines lock for "old-k8s-version-925000" in 72.25µs
	I0925 04:26:58.173560    6148 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-925000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-925000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 04:26:58.173605    6148 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 04:26:58.180712    6148 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0925 04:26:58.195100    6148 start.go:159] libmachine.API.Create for "old-k8s-version-925000" (driver="qemu2")
	I0925 04:26:58.195126    6148 client.go:168] LocalClient.Create starting
	I0925 04:26:58.195184    6148 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem
	I0925 04:26:58.195209    6148 main.go:141] libmachine: Decoding PEM data...
	I0925 04:26:58.195219    6148 main.go:141] libmachine: Parsing certificate...
	I0925 04:26:58.195255    6148 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/cert.pem
	I0925 04:26:58.195269    6148 main.go:141] libmachine: Decoding PEM data...
	I0925 04:26:58.195275    6148 main.go:141] libmachine: Parsing certificate...
	I0925 04:26:58.195528    6148 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17297-1010/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0925 04:26:58.357154    6148 main.go:141] libmachine: Creating SSH key...
	I0925 04:26:58.523690    6148 main.go:141] libmachine: Creating Disk image...
	I0925 04:26:58.523702    6148 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 04:26:58.523875    6148 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/old-k8s-version-925000/disk.qcow2.raw /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/old-k8s-version-925000/disk.qcow2
	I0925 04:26:58.532881    6148 main.go:141] libmachine: STDOUT: 
	I0925 04:26:58.532913    6148 main.go:141] libmachine: STDERR: 
	I0925 04:26:58.532987    6148 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/old-k8s-version-925000/disk.qcow2 +20000M
	I0925 04:26:58.541109    6148 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 04:26:58.541141    6148 main.go:141] libmachine: STDERR: 
	I0925 04:26:58.541162    6148 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/old-k8s-version-925000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/old-k8s-version-925000/disk.qcow2
	I0925 04:26:58.541172    6148 main.go:141] libmachine: Starting QEMU VM...
	I0925 04:26:58.541231    6148 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/old-k8s-version-925000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/old-k8s-version-925000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/old-k8s-version-925000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:2d:2c:bf:cd:ff -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/old-k8s-version-925000/disk.qcow2
	I0925 04:26:58.543367    6148 main.go:141] libmachine: STDOUT: 
	I0925 04:26:58.543399    6148 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 04:26:58.543414    6148 client.go:171] LocalClient.Create took 348.283375ms
	I0925 04:27:00.545703    6148 start.go:128] duration metric: createHost completed in 2.371982125s
	I0925 04:27:00.545767    6148 start.go:83] releasing machines lock for "old-k8s-version-925000", held for 2.37221375s
	W0925 04:27:00.546115    6148 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-925000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-925000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:27:00.563532    6148 out.go:177] 
	W0925 04:27:00.569074    6148 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 04:27:00.569100    6148 out.go:239] * 
	* 
	W0925 04:27:00.571143    6148 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0925 04:27:00.581751    6148 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-925000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-925000 -n old-k8s-version-925000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-925000 -n old-k8s-version-925000: exit status 7 (47.838625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-925000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (10.03s)
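
[Editor's note] As in the other Start failures in this report, the root cause surfaced here is host-side: every `qemu-system-aarch64` launch is wrapped in `/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet ...` and dies with `Failed to connect to "/var/run/socket_vmnet": Connection refused`, so the Kubernetes version under test never comes into play. A hedged triage sketch for the CI host follows; the daemon path mirrors the client path from the log, while the launchd label and the gateway address are assumptions not taken from this report:

	ls -l /var/run/socket_vmnet                  # does the Unix socket exist at all?
	sudo launchctl list | grep -i socket_vmnet   # is a socket_vmnet daemon loaded? (label is a guess)
	# If nothing is listening, starting the daemon by hand should clear the error:
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet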

TestStartStop/group/no-preload/serial/FirstStart (12.05s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-583000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-583000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.2: exit status 80 (11.995004792s)

-- stdout --
	* [no-preload-583000] minikube v1.31.2 on Darwin 13.6 (arm64)
	  - MINIKUBE_LOCATION=17297
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node no-preload-583000 in cluster no-preload-583000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-583000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0925 04:26:58.346282    6260 out.go:296] Setting OutFile to fd 1 ...
	I0925 04:26:58.346422    6260 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:26:58.346425    6260 out.go:309] Setting ErrFile to fd 2...
	I0925 04:26:58.346428    6260 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:26:58.346559    6260 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17297-1010/.minikube/bin
	I0925 04:26:58.347913    6260 out.go:303] Setting JSON to false
	I0925 04:26:58.365979    6260 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3393,"bootTime":1695637825,"procs":418,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.6","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 04:26:58.366067    6260 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0925 04:26:58.371602    6260 out.go:177] * [no-preload-583000] minikube v1.31.2 on Darwin 13.6 (arm64)
	I0925 04:26:58.379756    6260 out.go:177]   - MINIKUBE_LOCATION=17297
	I0925 04:26:58.383781    6260 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	I0925 04:26:58.379862    6260 notify.go:220] Checking for updates...
	I0925 04:26:58.389812    6260 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 04:26:58.394707    6260 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 04:26:58.401818    6260 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	I0925 04:26:58.405752    6260 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 04:26:58.409140    6260 config.go:182] Loaded profile config "multinode-352000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 04:26:58.409204    6260 config.go:182] Loaded profile config "old-k8s-version-925000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0925 04:26:58.409244    6260 driver.go:373] Setting default libvirt URI to qemu:///system
	I0925 04:26:58.412672    6260 out.go:177] * Using the qemu2 driver based on user configuration
	I0925 04:26:58.418788    6260 start.go:298] selected driver: qemu2
	I0925 04:26:58.418793    6260 start.go:902] validating driver "qemu2" against <nil>
	I0925 04:26:58.418799    6260 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 04:26:58.420779    6260 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0925 04:26:58.421983    6260 out.go:177] * Automatically selected the socket_vmnet network
	I0925 04:26:58.424782    6260 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0925 04:26:58.424804    6260 cni.go:84] Creating CNI manager for ""
	I0925 04:26:58.424813    6260 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 04:26:58.424817    6260 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0925 04:26:58.424822    6260 start_flags.go:321] config:
	{Name:no-preload-583000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:no-preload-583000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 04:26:58.428722    6260 iso.go:125] acquiring lock: {Name:mkf881a60cf9fd1672567914305ff6f7a4f13809 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 04:26:58.435766    6260 out.go:177] * Starting control plane node no-preload-583000 in cluster no-preload-583000
	I0925 04:26:58.439705    6260 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0925 04:26:58.439774    6260 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/no-preload-583000/config.json ...
	I0925 04:26:58.439789    6260 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/no-preload-583000/config.json: {Name:mk9a7b180ccd66785cf564274bc9d011c7d39ece Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 04:26:58.439797    6260 cache.go:107] acquiring lock: {Name:mkabf7fabdeaff7e666ac8f9deef5b56be85207e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 04:26:58.439797    6260 cache.go:107] acquiring lock: {Name:mk8c56e5ef32d4432d084bb256d3f7f9e4101f20 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 04:26:58.439817    6260 cache.go:107] acquiring lock: {Name:mk713969746b190f5004b9db04e7550ca40ba84a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 04:26:58.439920    6260 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.2
	I0925 04:26:58.439929    6260 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.2
	I0925 04:26:58.439950    6260 cache.go:107] acquiring lock: {Name:mk23237034098747647e04602b84b61998414bf4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 04:26:58.439898    6260 cache.go:107] acquiring lock: {Name:mk0b2600468dfd003d8ca8312eece64600df4ad2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 04:26:58.439987    6260 start.go:365] acquiring machines lock for no-preload-583000: {Name:mk918d99818e2cf21e5912bc291ed18d4b442ba6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 04:26:58.439998    6260 cache.go:107] acquiring lock: {Name:mk0f31a6ee309e91a5a1fb87075eb50919e7f68f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 04:26:58.440020    6260 cache.go:107] acquiring lock: {Name:mk118c23adcb7020b052a077ebb1b9e32a9e6d36 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 04:26:58.440039    6260 cache.go:115] /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0925 04:26:58.440050    6260 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 253.75µs
	I0925 04:26:58.440062    6260 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0925 04:26:58.440078    6260 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I0925 04:26:58.440096    6260 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.2
	I0925 04:26:58.440098    6260 cache.go:107] acquiring lock: {Name:mkc25bb3873a1a8dccb3626dd27ed375863a57f7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 04:26:58.440124    6260 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0925 04:26:58.440196    6260 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I0925 04:26:58.440222    6260 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.2
	I0925 04:26:58.446303    6260 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I0925 04:26:58.446423    6260 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.2
	I0925 04:26:58.446453    6260 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.2
	I0925 04:26:58.446942    6260 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I0925 04:26:58.446972    6260 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.2
	I0925 04:26:58.446959    6260 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0925 04:26:58.447102    6260 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.2
	I0925 04:26:59.053806    6260 cache.go:162] opening:  /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1
	I0925 04:26:59.072706    6260 cache.go:162] opening:  /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.2
	I0925 04:26:59.287941    6260 cache.go:162] opening:  /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.2
	I0925 04:26:59.467031    6260 cache.go:162] opening:  /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0
	I0925 04:26:59.646694    6260 cache.go:162] opening:  /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.2
	I0925 04:26:59.854593    6260 cache.go:162] opening:  /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9
	I0925 04:26:59.980565    6260 cache.go:157] /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0925 04:26:59.980578    6260 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 1.540733s
	I0925 04:26:59.980586    6260 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0925 04:27:00.059566    6260 cache.go:162] opening:  /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.2
	I0925 04:27:00.545930    6260 start.go:369] acquired machines lock for "no-preload-583000" in 2.105908125s
	I0925 04:27:00.546137    6260 start.go:93] Provisioning new machine with config: &{Name:no-preload-583000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:no-preload-583000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 04:27:00.546359    6260 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 04:27:00.558716    6260 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0925 04:27:00.607870    6260 start.go:159] libmachine.API.Create for "no-preload-583000" (driver="qemu2")
	I0925 04:27:00.607903    6260 client.go:168] LocalClient.Create starting
	I0925 04:27:00.608023    6260 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem
	I0925 04:27:00.608076    6260 main.go:141] libmachine: Decoding PEM data...
	I0925 04:27:00.608099    6260 main.go:141] libmachine: Parsing certificate...
	I0925 04:27:00.608164    6260 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/cert.pem
	I0925 04:27:00.608201    6260 main.go:141] libmachine: Decoding PEM data...
	I0925 04:27:00.608215    6260 main.go:141] libmachine: Parsing certificate...
	I0925 04:27:00.608764    6260 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17297-1010/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0925 04:27:00.741762    6260 main.go:141] libmachine: Creating SSH key...
	I0925 04:27:00.856084    6260 main.go:141] libmachine: Creating Disk image...
	I0925 04:27:00.856098    6260 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 04:27:00.856268    6260 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/no-preload-583000/disk.qcow2.raw /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/no-preload-583000/disk.qcow2
	I0925 04:27:00.865523    6260 main.go:141] libmachine: STDOUT: 
	I0925 04:27:00.865558    6260 main.go:141] libmachine: STDERR: 
	I0925 04:27:00.865611    6260 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/no-preload-583000/disk.qcow2 +20000M
	I0925 04:27:00.880900    6260 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 04:27:00.880924    6260 main.go:141] libmachine: STDERR: 
	I0925 04:27:00.880942    6260 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/no-preload-583000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/no-preload-583000/disk.qcow2
	I0925 04:27:00.880949    6260 main.go:141] libmachine: Starting QEMU VM...
	I0925 04:27:00.880987    6260 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/no-preload-583000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/no-preload-583000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/no-preload-583000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:9d:c6:6b:68:21 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/no-preload-583000/disk.qcow2
	I0925 04:27:00.882737    6260 main.go:141] libmachine: STDOUT: 
	I0925 04:27:00.882753    6260 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 04:27:00.882774    6260 client.go:171] LocalClient.Create took 274.864667ms
	I0925 04:27:01.018358    6260 cache.go:157] /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.2 exists
	I0925 04:27:01.018371    6260 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.28.2" -> "/Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.2" took 2.578371167s
	I0925 04:27:01.018386    6260 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.28.2 -> /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.2 succeeded
	I0925 04:27:01.297872    6260 cache.go:157] /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 exists
	I0925 04:27:01.297912    6260 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.10.1" -> "/Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1" took 2.857982625s
	I0925 04:27:01.297935    6260 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.10.1 -> /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 succeeded
	I0925 04:27:02.882916    6260 start.go:128] duration metric: createHost completed in 2.336535s
	I0925 04:27:02.882950    6260 start.go:83] releasing machines lock for "no-preload-583000", held for 2.336929875s
	W0925 04:27:02.882983    6260 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:27:02.895778    6260 out.go:177] * Deleting "no-preload-583000" in qemu2 ...
	W0925 04:27:02.912672    6260 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:27:02.912684    6260 start.go:703] Will try again in 5 seconds ...
	I0925 04:27:03.231737    6260 cache.go:157] /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.2 exists
	I0925 04:27:03.231789    6260 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.28.2" -> "/Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.2" took 4.791771583s
	I0925 04:27:03.231820    6260 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.28.2 -> /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.2 succeeded
	I0925 04:27:03.443493    6260 cache.go:157] /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.2 exists
	I0925 04:27:03.443541    6260 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.28.2" -> "/Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.2" took 5.003734167s
	I0925 04:27:03.443610    6260 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.28.2 -> /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.2 succeeded
	I0925 04:27:04.030578    6260 cache.go:157] /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.2 exists
	I0925 04:27:04.030635    6260 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.28.2" -> "/Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.2" took 5.590803125s
	I0925 04:27:04.030662    6260 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.28.2 -> /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.2 succeeded
	I0925 04:27:07.600638    6260 cache.go:157] /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0 exists
	I0925 04:27:07.600689    6260 cache.go:96] cache image "registry.k8s.io/etcd:3.5.9-0" -> "/Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0" took 9.160736417s
	I0925 04:27:07.600714    6260 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.9-0 -> /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0 succeeded
	I0925 04:27:07.600755    6260 cache.go:87] Successfully saved all images to host disk.
	I0925 04:27:07.914788    6260 start.go:365] acquiring machines lock for no-preload-583000: {Name:mk918d99818e2cf21e5912bc291ed18d4b442ba6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 04:27:07.930581    6260 start.go:369] acquired machines lock for "no-preload-583000" in 15.738958ms
	I0925 04:27:07.930634    6260 start.go:93] Provisioning new machine with config: &{Name:no-preload-583000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:no-preload-583000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 04:27:07.930824    6260 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 04:27:07.942920    6260 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0925 04:27:07.986384    6260 start.go:159] libmachine.API.Create for "no-preload-583000" (driver="qemu2")
	I0925 04:27:07.986435    6260 client.go:168] LocalClient.Create starting
	I0925 04:27:07.986536    6260 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem
	I0925 04:27:07.986590    6260 main.go:141] libmachine: Decoding PEM data...
	I0925 04:27:07.986610    6260 main.go:141] libmachine: Parsing certificate...
	I0925 04:27:07.986674    6260 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/cert.pem
	I0925 04:27:07.986712    6260 main.go:141] libmachine: Decoding PEM data...
	I0925 04:27:07.986729    6260 main.go:141] libmachine: Parsing certificate...
	I0925 04:27:07.987220    6260 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17297-1010/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0925 04:27:08.118362    6260 main.go:141] libmachine: Creating SSH key...
	I0925 04:27:08.253110    6260 main.go:141] libmachine: Creating Disk image...
	I0925 04:27:08.253122    6260 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 04:27:08.253294    6260 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/no-preload-583000/disk.qcow2.raw /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/no-preload-583000/disk.qcow2
	I0925 04:27:08.262344    6260 main.go:141] libmachine: STDOUT: 
	I0925 04:27:08.262373    6260 main.go:141] libmachine: STDERR: 
	I0925 04:27:08.262430    6260 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/no-preload-583000/disk.qcow2 +20000M
	I0925 04:27:08.271035    6260 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 04:27:08.271061    6260 main.go:141] libmachine: STDERR: 
	I0925 04:27:08.271073    6260 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/no-preload-583000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/no-preload-583000/disk.qcow2
	I0925 04:27:08.271083    6260 main.go:141] libmachine: Starting QEMU VM...
	I0925 04:27:08.271129    6260 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/no-preload-583000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/no-preload-583000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/no-preload-583000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:5b:22:11:cc:4f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/no-preload-583000/disk.qcow2
	I0925 04:27:08.272846    6260 main.go:141] libmachine: STDOUT: 
	I0925 04:27:08.272870    6260 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 04:27:08.272882    6260 client.go:171] LocalClient.Create took 286.441667ms
	I0925 04:27:10.275135    6260 start.go:128] duration metric: createHost completed in 2.344276875s
	I0925 04:27:10.275194    6260 start.go:83] releasing machines lock for "no-preload-583000", held for 2.344585667s
	W0925 04:27:10.275490    6260 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-583000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-583000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:27:10.289114    6260 out.go:177] 
	W0925 04:27:10.293234    6260 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 04:27:10.293284    6260 out.go:239] * 
	* 
	W0925 04:27:10.296068    6260 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0925 04:27:10.304048    6260 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-583000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-583000 -n no-preload-583000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-583000 -n no-preload-583000: exit status 7 (49.161667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-583000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (12.05s)
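
Worth noting from the log above: all of the per-image cache saves completed ("Successfully saved all images to host disk"), so the failure is confined to VM networking. Once socket_vmnet is reachable again, the cleanup-and-retry path the output itself suggests would be, with arguments copied from this test invocation:

	out/minikube-darwin-arm64 delete -p no-preload-583000
	out/minikube-darwin-arm64 start -p no-preload-583000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2 --kubernetes-version=v1.28.2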

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-925000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-925000 create -f testdata/busybox.yaml: exit status 1 (30.09275ms)

** stderr ** 
	error: no openapi getter

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-925000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-925000 -n old-k8s-version-925000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-925000 -n old-k8s-version-925000: exit status 7 (31.311458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-925000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-925000 -n old-k8s-version-925000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-925000 -n old-k8s-version-925000: exit status 7 (31.262125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-925000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
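
The "error: no openapi getter" from kubectl here is a downstream symptom rather than the cause: the cluster behind the old-k8s-version-925000 context never came up, so kubectl cannot fetch the OpenAPI schema it needs to validate testdata/busybox.yaml. A quick sanity check with standard kubectl subcommands (same context name as above) would be:

	kubectl config get-contexts old-k8s-version-925000
	kubectl --context old-k8s-version-925000 cluster-info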

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-925000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-925000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-925000 describe deploy/metrics-server -n kube-system: exit status 1 (26.599333ms)

** stderr ** 
	error: context "old-k8s-version-925000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-925000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-925000 -n old-k8s-version-925000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-925000 -n old-k8s-version-925000: exit status 7 (28.450583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-925000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)
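
Note the ordering here: the addons enable call itself exits cleanly (no non-zero exit is logged), presumably because it only needs to update the profile's addon configuration, and it is the follow-up kubectl describe that fails for lack of a usable context. On a healthy cluster, the assertion amounts to checking that the deployment picked up the overridden registry, roughly:

	kubectl --context old-k8s-version-925000 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'
	# expected to contain: fake.domain/registry.k8s.io/echoserver:1.4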

TestStartStop/group/old-k8s-version/serial/SecondStart (7.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-925000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-925000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0: exit status 80 (6.981111625s)

-- stdout --
	* [old-k8s-version-925000] minikube v1.31.2 on Darwin 13.6 (arm64)
	  - MINIKUBE_LOCATION=17297
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.2
	* Using the qemu2 driver based on existing profile
	* Starting control plane node old-k8s-version-925000 in cluster old-k8s-version-925000
	* Restarting existing qemu2 VM for "old-k8s-version-925000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-925000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0925 04:27:01.013432    6391 out.go:296] Setting OutFile to fd 1 ...
	I0925 04:27:01.013568    6391 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:27:01.013572    6391 out.go:309] Setting ErrFile to fd 2...
	I0925 04:27:01.013574    6391 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:27:01.013719    6391 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17297-1010/.minikube/bin
	I0925 04:27:01.014759    6391 out.go:303] Setting JSON to false
	I0925 04:27:01.030222    6391 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3396,"bootTime":1695637825,"procs":420,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.6","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 04:27:01.030315    6391 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0925 04:27:01.034703    6391 out.go:177] * [old-k8s-version-925000] minikube v1.31.2 on Darwin 13.6 (arm64)
	I0925 04:27:01.045713    6391 out.go:177]   - MINIKUBE_LOCATION=17297
	I0925 04:27:01.041602    6391 notify.go:220] Checking for updates...
	I0925 04:27:01.053671    6391 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	I0925 04:27:01.060668    6391 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 04:27:01.067685    6391 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 04:27:01.074643    6391 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	I0925 04:27:01.082718    6391 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 04:27:01.088049    6391 config.go:182] Loaded profile config "old-k8s-version-925000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0925 04:27:01.091683    6391 out.go:177] * Kubernetes 1.28.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.2
	I0925 04:27:01.095645    6391 driver.go:373] Setting default libvirt URI to qemu:///system
	I0925 04:27:01.099738    6391 out.go:177] * Using the qemu2 driver based on existing profile
	I0925 04:27:01.106708    6391 start.go:298] selected driver: qemu2
	I0925 04:27:01.106714    6391 start.go:902] validating driver "qemu2" against &{Name:old-k8s-version-925000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-925000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 04:27:01.106786    6391 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 04:27:01.108919    6391 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0925 04:27:01.108943    6391 cni.go:84] Creating CNI manager for ""
	I0925 04:27:01.108950    6391 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0925 04:27:01.108955    6391 start_flags.go:321] config:
	{Name:old-k8s-version-925000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-925000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 04:27:01.113062    6391 iso.go:125] acquiring lock: {Name:mkf881a60cf9fd1672567914305ff6f7a4f13809 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 04:27:01.120716    6391 out.go:177] * Starting control plane node old-k8s-version-925000 in cluster old-k8s-version-925000
	I0925 04:27:01.124747    6391 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0925 04:27:01.124763    6391 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0925 04:27:01.124774    6391 cache.go:57] Caching tarball of preloaded images
	I0925 04:27:01.124825    6391 preload.go:174] Found /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0925 04:27:01.124830    6391 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0925 04:27:01.124886    6391 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/old-k8s-version-925000/config.json ...
	I0925 04:27:01.125115    6391 start.go:365] acquiring machines lock for old-k8s-version-925000: {Name:mk918d99818e2cf21e5912bc291ed18d4b442ba6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 04:27:02.883035    6391 start.go:369] acquired machines lock for "old-k8s-version-925000" in 1.7578855s
	I0925 04:27:02.883104    6391 start.go:96] Skipping create...Using existing machine configuration
	I0925 04:27:02.883115    6391 fix.go:54] fixHost starting: 
	I0925 04:27:02.883421    6391 fix.go:102] recreateIfNeeded on old-k8s-version-925000: state=Stopped err=<nil>
	W0925 04:27:02.883444    6391 fix.go:128] unexpected machine state, will restart: <nil>
	I0925 04:27:02.891736    6391 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-925000" ...
	I0925 04:27:02.899736    6391 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/old-k8s-version-925000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/old-k8s-version-925000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/old-k8s-version-925000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:2d:2c:bf:cd:ff -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/old-k8s-version-925000/disk.qcow2
	I0925 04:27:02.903652    6391 main.go:141] libmachine: STDOUT: 
	I0925 04:27:02.903682    6391 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 04:27:02.903739    6391 fix.go:56] fixHost completed within 20.620417ms
	I0925 04:27:02.903751    6391 start.go:83] releasing machines lock for "old-k8s-version-925000", held for 20.693042ms
	W0925 04:27:02.903813    6391 start.go:688] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 04:27:02.903910    6391 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:27:02.903923    6391 start.go:703] Will try again in 5 seconds ...
	I0925 04:27:07.906203    6391 start.go:365] acquiring machines lock for old-k8s-version-925000: {Name:mk918d99818e2cf21e5912bc291ed18d4b442ba6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 04:27:07.906619    6391 start.go:369] acquired machines lock for "old-k8s-version-925000" in 339.125µs
	I0925 04:27:07.906751    6391 start.go:96] Skipping create...Using existing machine configuration
	I0925 04:27:07.906773    6391 fix.go:54] fixHost starting: 
	I0925 04:27:07.907524    6391 fix.go:102] recreateIfNeeded on old-k8s-version-925000: state=Stopped err=<nil>
	W0925 04:27:07.907551    6391 fix.go:128] unexpected machine state, will restart: <nil>
	I0925 04:27:07.917055    6391 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-925000" ...
	I0925 04:27:07.921198    6391 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/old-k8s-version-925000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/old-k8s-version-925000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/old-k8s-version-925000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:2d:2c:bf:cd:ff -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/old-k8s-version-925000/disk.qcow2
	I0925 04:27:07.930358    6391 main.go:141] libmachine: STDOUT: 
	I0925 04:27:07.930415    6391 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 04:27:07.930501    6391 fix.go:56] fixHost completed within 23.731833ms
	I0925 04:27:07.930519    6391 start.go:83] releasing machines lock for "old-k8s-version-925000", held for 23.881ms
	W0925 04:27:07.930670    6391 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-925000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-925000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:27:07.946035    6391 out.go:177] 
	W0925 04:27:07.949992    6391 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 04:27:07.950022    6391 out.go:239] * 
	* 
	W0925 04:27:07.951829    6391 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0925 04:27:07.960930    6391 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-925000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-925000 -n old-k8s-version-925000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-925000 -n old-k8s-version-925000: exit status 7 (47.460416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-925000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (7.03s)
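Every failed start in this report dies on the same STDERR line: Failed to connect to "/var/run/socket_vmnet": Connection refused. That error means nothing is listening on the unix socket the qemu2 driver uses for networking, so no VM can come up until the socket_vmnet daemon on the build agent is running again. A minimal Go sketch (editorial, not part of the test suite) that reproduces the probe against the same path:

// probe.go: dial the unix socket the qemu2 driver needs. If the
// socket_vmnet daemon is down, this fails with the same
// "connection refused" seen throughout this report.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}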

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-925000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-925000 -n old-k8s-version-925000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-925000 -n old-k8s-version-925000: exit status 7 (32.057875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-925000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)
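The post-mortem probe above treats a non-zero exit as informational ("may be ok") because minikube encodes host state in the exit code of `minikube status`; exit status 7 is what a stopped host returns throughout this report. A small Go sketch of the same probe, assuming only the CLI behavior visible here:

// status.go: run the harness's post-mortem status check and recover
// the exit code instead of treating it as a hard failure.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "status", "--format={{.Host}}",
		"-p", "old-k8s-version-925000")
	out, err := cmd.Output() // stdout is still captured on a non-zero exit
	code := 0
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		code = ee.ExitCode() // 7 here means "host stopped", not a probe failure
	} else if err != nil {
		panic(err)
	}
	fmt.Printf("host=%q exit=%d\n", out, code)
}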

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-925000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-925000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-925000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (29.322125ms)

** stderr ** 
	error: context "old-k8s-version-925000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-925000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-925000 -n old-k8s-version-925000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-925000 -n old-k8s-version-925000: exit status 7 (31.475125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-925000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)
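The repeated error context "old-k8s-version-925000" does not exist is a cascade, not an independent failure: the second start exited before provisioning, so the profile's context was never written back to the kubeconfig, and every later kubectl call fails while building its client config. A hedged client-go sketch of the lookup kubectl performs (the API is real; this program is illustrative only):

// contexts.go: load the default kubeconfig chain and look for the
// profile's context; after the failed start it is simply absent.
package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
	if err != nil {
		panic(err)
	}
	for name := range cfg.Contexts {
		fmt.Println("have context:", name)
	}
	if _, ok := cfg.Contexts["old-k8s-version-925000"]; !ok {
		fmt.Println(`context "old-k8s-version-925000" does not exist`)
	}
}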

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p old-k8s-version-925000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p old-k8s-version-925000 "sudo crictl images -o json": exit status 89 (37.469916ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p old-k8s-version-925000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p old-k8s-version-925000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p old-k8s-version-925000"
start_stop_delete_test.go:304: v1.16.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.6.2",
- 	"k8s.gcr.io/etcd:3.3.15-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.16.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.16.0",
- 	"k8s.gcr.io/kube-proxy:v1.16.0",
- 	"k8s.gcr.io/kube-scheduler:v1.16.0",
- 	"k8s.gcr.io/pause:3.1",
  }
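The invalid character '*' decode error is likewise secondary: `minikube ssh` exited 89 and printed its plain-text hint, and the test handed that hint straight to the JSON decoder. A defensive version of the decode step; the struct shape is assumed from crictl's usual `images -o json` output, not taken from this report:

// decode.go: guard the crictl JSON decode against non-JSON output.
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// imageList mirrors the assumed shape of `crictl images -o json`.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func decode(out string) (*imageList, error) {
	trimmed := strings.TrimSpace(out)
	if !strings.HasPrefix(trimmed, "{") {
		return nil, fmt.Errorf("crictl returned non-JSON output: %q", trimmed)
	}
	var l imageList
	if err := json.Unmarshal([]byte(trimmed), &l); err != nil {
		return nil, err
	}
	return &l, nil
}

func main() {
	_, err := decode("* The control plane node must be running for this command")
	fmt.Println(err) // crictl returned non-JSON output: ...
}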
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-925000 -n old-k8s-version-925000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-925000 -n old-k8s-version-925000: exit status 7 (27.62625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-925000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/old-k8s-version/serial/Pause (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-925000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-925000 --alsologtostderr -v=1: exit status 89 (39.893875ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p old-k8s-version-925000"

-- /stdout --
** stderr ** 
	I0925 04:27:08.203944    6414 out.go:296] Setting OutFile to fd 1 ...
	I0925 04:27:08.204310    6414 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:27:08.204314    6414 out.go:309] Setting ErrFile to fd 2...
	I0925 04:27:08.204317    6414 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:27:08.204458    6414 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17297-1010/.minikube/bin
	I0925 04:27:08.204687    6414 out.go:303] Setting JSON to false
	I0925 04:27:08.204697    6414 mustload.go:65] Loading cluster: old-k8s-version-925000
	I0925 04:27:08.204896    6414 config.go:182] Loaded profile config "old-k8s-version-925000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0925 04:27:08.209003    6414 out.go:177] * The control plane node must be running for this command
	I0925 04:27:08.213022    6414 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-925000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-925000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-925000 -n old-k8s-version-925000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-925000 -n old-k8s-version-925000: exit status 7 (27.22675ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-925000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-925000 -n old-k8s-version-925000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-925000 -n old-k8s-version-925000: exit status 7 (27.372042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-925000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.09s)

TestStartStop/group/embed-certs/serial/FirstStart (11.56s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-064000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-064000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.2: exit status 80 (11.510075958s)

-- stdout --
	* [embed-certs-064000] minikube v1.31.2 on Darwin 13.6 (arm64)
	  - MINIKUBE_LOCATION=17297
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node embed-certs-064000 in cluster embed-certs-064000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-064000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0925 04:27:08.662984    6440 out.go:296] Setting OutFile to fd 1 ...
	I0925 04:27:08.663328    6440 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:27:08.663332    6440 out.go:309] Setting ErrFile to fd 2...
	I0925 04:27:08.663335    6440 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:27:08.663519    6440 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17297-1010/.minikube/bin
	I0925 04:27:08.664934    6440 out.go:303] Setting JSON to false
	I0925 04:27:08.680434    6440 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3403,"bootTime":1695637825,"procs":419,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.6","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 04:27:08.680525    6440 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0925 04:27:08.684329    6440 out.go:177] * [embed-certs-064000] minikube v1.31.2 on Darwin 13.6 (arm64)
	I0925 04:27:08.691249    6440 out.go:177]   - MINIKUBE_LOCATION=17297
	I0925 04:27:08.694274    6440 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	I0925 04:27:08.691333    6440 notify.go:220] Checking for updates...
	I0925 04:27:08.697254    6440 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 04:27:08.700195    6440 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 04:27:08.703218    6440 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	I0925 04:27:08.706249    6440 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 04:27:08.709582    6440 config.go:182] Loaded profile config "multinode-352000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 04:27:08.709650    6440 config.go:182] Loaded profile config "no-preload-583000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 04:27:08.709702    6440 driver.go:373] Setting default libvirt URI to qemu:///system
	I0925 04:27:08.714225    6440 out.go:177] * Using the qemu2 driver based on user configuration
	I0925 04:27:08.721130    6440 start.go:298] selected driver: qemu2
	I0925 04:27:08.721139    6440 start.go:902] validating driver "qemu2" against <nil>
	I0925 04:27:08.721145    6440 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 04:27:08.723161    6440 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0925 04:27:08.726197    6440 out.go:177] * Automatically selected the socket_vmnet network
	I0925 04:27:08.729307    6440 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0925 04:27:08.729325    6440 cni.go:84] Creating CNI manager for ""
	I0925 04:27:08.729333    6440 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 04:27:08.729337    6440 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0925 04:27:08.729343    6440 start_flags.go:321] config:
	{Name:embed-certs-064000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:embed-certs-064000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 04:27:08.733514    6440 iso.go:125] acquiring lock: {Name:mkf881a60cf9fd1672567914305ff6f7a4f13809 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 04:27:08.741162    6440 out.go:177] * Starting control plane node embed-certs-064000 in cluster embed-certs-064000
	I0925 04:27:08.745252    6440 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0925 04:27:08.745271    6440 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0925 04:27:08.745285    6440 cache.go:57] Caching tarball of preloaded images
	I0925 04:27:08.745345    6440 preload.go:174] Found /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0925 04:27:08.745351    6440 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0925 04:27:08.745411    6440 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/embed-certs-064000/config.json ...
	I0925 04:27:08.745424    6440 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/embed-certs-064000/config.json: {Name:mk7e4b629c7583140ca134d203fa9071f3f7904b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 04:27:08.745616    6440 start.go:365] acquiring machines lock for embed-certs-064000: {Name:mk918d99818e2cf21e5912bc291ed18d4b442ba6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 04:27:10.275294    6440 start.go:369] acquired machines lock for "embed-certs-064000" in 1.529650208s
	I0925 04:27:10.275518    6440 start.go:93] Provisioning new machine with config: &{Name:embed-certs-064000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:embed-certs-064000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 04:27:10.275753    6440 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 04:27:10.285128    6440 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0925 04:27:10.332876    6440 start.go:159] libmachine.API.Create for "embed-certs-064000" (driver="qemu2")
	I0925 04:27:10.332936    6440 client.go:168] LocalClient.Create starting
	I0925 04:27:10.333076    6440 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem
	I0925 04:27:10.333128    6440 main.go:141] libmachine: Decoding PEM data...
	I0925 04:27:10.333147    6440 main.go:141] libmachine: Parsing certificate...
	I0925 04:27:10.333218    6440 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/cert.pem
	I0925 04:27:10.333258    6440 main.go:141] libmachine: Decoding PEM data...
	I0925 04:27:10.333276    6440 main.go:141] libmachine: Parsing certificate...
	I0925 04:27:10.333875    6440 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17297-1010/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0925 04:27:10.469067    6440 main.go:141] libmachine: Creating SSH key...
	I0925 04:27:10.730487    6440 main.go:141] libmachine: Creating Disk image...
	I0925 04:27:10.730499    6440 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 04:27:10.730646    6440 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/embed-certs-064000/disk.qcow2.raw /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/embed-certs-064000/disk.qcow2
	I0925 04:27:10.739697    6440 main.go:141] libmachine: STDOUT: 
	I0925 04:27:10.739723    6440 main.go:141] libmachine: STDERR: 
	I0925 04:27:10.739816    6440 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/embed-certs-064000/disk.qcow2 +20000M
	I0925 04:27:10.747943    6440 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 04:27:10.747968    6440 main.go:141] libmachine: STDERR: 
	I0925 04:27:10.747993    6440 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/embed-certs-064000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/embed-certs-064000/disk.qcow2
	I0925 04:27:10.748002    6440 main.go:141] libmachine: Starting QEMU VM...
	I0925 04:27:10.748051    6440 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/embed-certs-064000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/embed-certs-064000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/embed-certs-064000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:cf:09:d1:ec:d5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/embed-certs-064000/disk.qcow2
	I0925 04:27:10.749719    6440 main.go:141] libmachine: STDOUT: 
	I0925 04:27:10.749733    6440 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 04:27:10.749755    6440 client.go:171] LocalClient.Create took 416.809584ms
	I0925 04:27:12.752022    6440 start.go:128] duration metric: createHost completed in 2.476237709s
	I0925 04:27:12.752080    6440 start.go:83] releasing machines lock for "embed-certs-064000", held for 2.476749625s
	W0925 04:27:12.752125    6440 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:27:12.769419    6440 out.go:177] * Deleting "embed-certs-064000" in qemu2 ...
	W0925 04:27:12.788806    6440 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:27:12.788831    6440 start.go:703] Will try again in 5 seconds ...
	I0925 04:27:17.790968    6440 start.go:365] acquiring machines lock for embed-certs-064000: {Name:mk918d99818e2cf21e5912bc291ed18d4b442ba6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 04:27:17.801834    6440 start.go:369] acquired machines lock for "embed-certs-064000" in 10.79975ms
	I0925 04:27:17.801879    6440 start.go:93] Provisioning new machine with config: &{Name:embed-certs-064000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:embed-certs-064000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 04:27:17.802052    6440 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 04:27:17.814482    6440 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0925 04:27:17.858326    6440 start.go:159] libmachine.API.Create for "embed-certs-064000" (driver="qemu2")
	I0925 04:27:17.858367    6440 client.go:168] LocalClient.Create starting
	I0925 04:27:17.858488    6440 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem
	I0925 04:27:17.858529    6440 main.go:141] libmachine: Decoding PEM data...
	I0925 04:27:17.858545    6440 main.go:141] libmachine: Parsing certificate...
	I0925 04:27:17.858634    6440 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/cert.pem
	I0925 04:27:17.858668    6440 main.go:141] libmachine: Decoding PEM data...
	I0925 04:27:17.858681    6440 main.go:141] libmachine: Parsing certificate...
	I0925 04:27:17.859149    6440 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17297-1010/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0925 04:27:17.990630    6440 main.go:141] libmachine: Creating SSH key...
	I0925 04:27:18.086250    6440 main.go:141] libmachine: Creating Disk image...
	I0925 04:27:18.086261    6440 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 04:27:18.086412    6440 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/embed-certs-064000/disk.qcow2.raw /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/embed-certs-064000/disk.qcow2
	I0925 04:27:18.096260    6440 main.go:141] libmachine: STDOUT: 
	I0925 04:27:18.096284    6440 main.go:141] libmachine: STDERR: 
	I0925 04:27:18.096354    6440 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/embed-certs-064000/disk.qcow2 +20000M
	I0925 04:27:18.104043    6440 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 04:27:18.104067    6440 main.go:141] libmachine: STDERR: 
	I0925 04:27:18.104079    6440 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/embed-certs-064000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/embed-certs-064000/disk.qcow2
	I0925 04:27:18.104089    6440 main.go:141] libmachine: Starting QEMU VM...
	I0925 04:27:18.104151    6440 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/embed-certs-064000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/embed-certs-064000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/embed-certs-064000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:56:eb:65:33:a3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/embed-certs-064000/disk.qcow2
	I0925 04:27:18.105938    6440 main.go:141] libmachine: STDOUT: 
	I0925 04:27:18.105955    6440 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 04:27:18.105970    6440 client.go:171] LocalClient.Create took 247.597125ms
	I0925 04:27:20.108260    6440 start.go:128] duration metric: createHost completed in 2.306148416s
	I0925 04:27:20.108346    6440 start.go:83] releasing machines lock for "embed-certs-064000", held for 2.3064875s
	W0925 04:27:20.108741    6440 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-064000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-064000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:27:20.122241    6440 out.go:177] 
	W0925 04:27:20.126363    6440 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 04:27:20.126396    6440 out.go:239] * 
	* 
	W0925 04:27:20.128411    6440 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0925 04:27:20.138207    6440 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-064000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-064000 -n embed-certs-064000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-064000 -n embed-certs-064000: exit status 7 (47.182333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-064000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (11.56s)
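The stderr above shows the driver's complete create-and-retry cycle: build the qcow2 disk, launch qemu-system-aarch64 through socket_vmnet_client, fail on the socket, delete the half-created VM, wait a fixed 5 seconds, and try exactly once more. A Go sketch of that control flow; createHost and deleteHost are hypothetical stand-ins, not minikube's real API:

// retry.go: the create -> delete -> wait 5s -> retry-once flow in the
// log. Both attempts fail identically because the missing socket_vmnet
// daemon is external to the VM lifecycle.
package main

import (
	"errors"
	"fmt"
	"time"
)

// createHost is a hypothetical stand-in for the driver's create call.
func createHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

// deleteHost stands in for removing the partially created VM.
func deleteHost() {}

func main() {
	if err := createHost(); err != nil {
		deleteHost()
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		time.Sleep(5 * time.Second)
		if err = createHost(); err != nil {
			fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
		}
	}
}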

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-583000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-583000 create -f testdata/busybox.yaml: exit status 1 (30.410667ms)

** stderr ** 
	error: no openapi getter

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-583000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-583000 -n no-preload-583000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-583000 -n no-preload-583000: exit status 7 (31.24525ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-583000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-583000 -n no-preload-583000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-583000 -n no-preload-583000: exit status 7 (31.117083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-583000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-583000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-583000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-583000 describe deploy/metrics-server -n kube-system: exit status 1 (26.36975ms)

** stderr ** 
	error: context "no-preload-583000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-583000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-583000 -n no-preload-583000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-583000 -n no-preload-583000: exit status 7 (28.063417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-583000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)
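The image assertion here is a plain substring check: the test enables the addon with --images and --registries overrides, then expects the deployment description to contain the overridden reference, which is the registry joined to the image with a slash. A sketch of that check under the same inputs:

// expect.go: rebuild the expected reference from the flags used above
// and check the (empty, because describe failed) deployment info.
package main

import (
	"fmt"
	"strings"
)

func main() {
	registry := "fake.domain"                 // from --registries
	image := "registry.k8s.io/echoserver:1.4" // from --images
	want := registry + "/" + image
	deployInfo := "" // empty: `kubectl describe` failed, context is gone
	if !strings.Contains(deployInfo, want) {
		fmt.Printf("addon did not load correct image. Expected to contain %q\n", want)
	}
}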

TestStartStop/group/no-preload/serial/SecondStart (7.17s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-583000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-583000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.2: exit status 80 (7.118358708s)

-- stdout --
	* [no-preload-583000] minikube v1.31.2 on Darwin 13.6 (arm64)
	  - MINIKUBE_LOCATION=17297
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node no-preload-583000 in cluster no-preload-583000
	* Restarting existing qemu2 VM for "no-preload-583000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-583000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0925 04:27:10.747447    6465 out.go:296] Setting OutFile to fd 1 ...
	I0925 04:27:10.747560    6465 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:27:10.747564    6465 out.go:309] Setting ErrFile to fd 2...
	I0925 04:27:10.747567    6465 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:27:10.747705    6465 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17297-1010/.minikube/bin
	I0925 04:27:10.748687    6465 out.go:303] Setting JSON to false
	I0925 04:27:10.764087    6465 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3405,"bootTime":1695637825,"procs":419,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.6","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 04:27:10.764168    6465 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0925 04:27:10.769368    6465 out.go:177] * [no-preload-583000] minikube v1.31.2 on Darwin 13.6 (arm64)
	I0925 04:27:10.775317    6465 out.go:177]   - MINIKUBE_LOCATION=17297
	I0925 04:27:10.775385    6465 notify.go:220] Checking for updates...
	I0925 04:27:10.779340    6465 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	I0925 04:27:10.782436    6465 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 04:27:10.785370    6465 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 04:27:10.788406    6465 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	I0925 04:27:10.791383    6465 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 04:27:10.793034    6465 config.go:182] Loaded profile config "no-preload-583000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 04:27:10.793290    6465 driver.go:373] Setting default libvirt URI to qemu:///system
	I0925 04:27:10.797335    6465 out.go:177] * Using the qemu2 driver based on existing profile
	I0925 04:27:10.804128    6465 start.go:298] selected driver: qemu2
	I0925 04:27:10.804134    6465 start.go:902] validating driver "qemu2" against &{Name:no-preload-583000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:no-preload-583000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 04:27:10.804189    6465 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 04:27:10.806174    6465 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0925 04:27:10.806197    6465 cni.go:84] Creating CNI manager for ""
	I0925 04:27:10.806205    6465 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 04:27:10.806211    6465 start_flags.go:321] config:
	{Name:no-preload-583000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:no-preload-583000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 04:27:10.810160    6465 iso.go:125] acquiring lock: {Name:mkf881a60cf9fd1672567914305ff6f7a4f13809 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 04:27:10.817355    6465 out.go:177] * Starting control plane node no-preload-583000 in cluster no-preload-583000
	I0925 04:27:10.821320    6465 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0925 04:27:10.821378    6465 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/no-preload-583000/config.json ...
	I0925 04:27:10.821397    6465 cache.go:107] acquiring lock: {Name:mkabf7fabdeaff7e666ac8f9deef5b56be85207e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 04:27:10.821401    6465 cache.go:107] acquiring lock: {Name:mk713969746b190f5004b9db04e7550ca40ba84a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 04:27:10.821412    6465 cache.go:107] acquiring lock: {Name:mk0f31a6ee309e91a5a1fb87075eb50919e7f68f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 04:27:10.821453    6465 cache.go:115] /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0925 04:27:10.821454    6465 cache.go:115] /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.2 exists
	I0925 04:27:10.821458    6465 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 63.916µs
	I0925 04:27:10.821463    6465 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0925 04:27:10.821461    6465 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.28.2" -> "/Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.2" took 67.75µs
	I0925 04:27:10.821466    6465 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.28.2 -> /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.2 succeeded
	I0925 04:27:10.821469    6465 cache.go:115] /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.2 exists
	I0925 04:27:10.821468    6465 cache.go:107] acquiring lock: {Name:mk8c56e5ef32d4432d084bb256d3f7f9e4101f20 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 04:27:10.821472    6465 cache.go:107] acquiring lock: {Name:mk0b2600468dfd003d8ca8312eece64600df4ad2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 04:27:10.821480    6465 cache.go:107] acquiring lock: {Name:mkc25bb3873a1a8dccb3626dd27ed375863a57f7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 04:27:10.821507    6465 cache.go:115] /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0925 04:27:10.821545    6465 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 74µs
	I0925 04:27:10.821550    6465 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0925 04:27:10.821515    6465 cache.go:115] /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.2 exists
	I0925 04:27:10.821553    6465 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.28.2" -> "/Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.2" took 86.292µs
	I0925 04:27:10.821558    6465 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.28.2 -> /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.2 succeeded
	I0925 04:27:10.821517    6465 cache.go:115] /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.2 exists
	I0925 04:27:10.821562    6465 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.28.2" -> "/Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.2" took 82.542µs
	I0925 04:27:10.821565    6465 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.28.2 -> /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.2 succeeded
	I0925 04:27:10.821474    6465 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.28.2" -> "/Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.2" took 70.75µs
	I0925 04:27:10.821568    6465 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.28.2 -> /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.2 succeeded
	I0925 04:27:10.821525    6465 cache.go:107] acquiring lock: {Name:mk118c23adcb7020b052a077ebb1b9e32a9e6d36 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 04:27:10.821568    6465 cache.go:107] acquiring lock: {Name:mk23237034098747647e04602b84b61998414bf4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 04:27:10.821613    6465 cache.go:115] /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0 exists
	I0925 04:27:10.821617    6465 cache.go:96] cache image "registry.k8s.io/etcd:3.5.9-0" -> "/Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0" took 92.291µs
	I0925 04:27:10.821621    6465 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.9-0 -> /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0 succeeded
	I0925 04:27:10.821632    6465 cache.go:115] /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 exists
	I0925 04:27:10.821636    6465 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.10.1" -> "/Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1" took 79.958µs
	I0925 04:27:10.821641    6465 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.10.1 -> /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 succeeded
	I0925 04:27:10.821649    6465 cache.go:87] Successfully saved all images to host disk.
	I0925 04:27:10.821682    6465 start.go:365] acquiring machines lock for no-preload-583000: {Name:mk918d99818e2cf21e5912bc291ed18d4b442ba6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 04:27:12.752198    6465 start.go:369] acquired machines lock for "no-preload-583000" in 1.930489625s
	I0925 04:27:12.752376    6465 start.go:96] Skipping create...Using existing machine configuration
	I0925 04:27:12.752398    6465 fix.go:54] fixHost starting: 
	I0925 04:27:12.753010    6465 fix.go:102] recreateIfNeeded on no-preload-583000: state=Stopped err=<nil>
	W0925 04:27:12.753058    6465 fix.go:128] unexpected machine state, will restart: <nil>
	I0925 04:27:12.761450    6465 out.go:177] * Restarting existing qemu2 VM for "no-preload-583000" ...
	I0925 04:27:12.772610    6465 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/no-preload-583000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/no-preload-583000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/no-preload-583000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:5b:22:11:cc:4f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/no-preload-583000/disk.qcow2
	I0925 04:27:12.780154    6465 main.go:141] libmachine: STDOUT: 
	I0925 04:27:12.780218    6465 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 04:27:12.780328    6465 fix.go:56] fixHost completed within 27.930916ms
	I0925 04:27:12.780346    6465 start.go:83] releasing machines lock for "no-preload-583000", held for 28.111625ms
	W0925 04:27:12.780374    6465 start.go:688] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 04:27:12.780490    6465 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:27:12.780503    6465 start.go:703] Will try again in 5 seconds ...
	I0925 04:27:17.782797    6465 start.go:365] acquiring machines lock for no-preload-583000: {Name:mk918d99818e2cf21e5912bc291ed18d4b442ba6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 04:27:17.783287    6465 start.go:369] acquired machines lock for "no-preload-583000" in 391.958µs
	I0925 04:27:17.783444    6465 start.go:96] Skipping create...Using existing machine configuration
	I0925 04:27:17.783468    6465 fix.go:54] fixHost starting: 
	I0925 04:27:17.784205    6465 fix.go:102] recreateIfNeeded on no-preload-583000: state=Stopped err=<nil>
	W0925 04:27:17.784232    6465 fix.go:128] unexpected machine state, will restart: <nil>
	I0925 04:27:17.788655    6465 out.go:177] * Restarting existing qemu2 VM for "no-preload-583000" ...
	I0925 04:27:17.792824    6465 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/no-preload-583000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/no-preload-583000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/no-preload-583000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:5b:22:11:cc:4f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/no-preload-583000/disk.qcow2
	I0925 04:27:17.801617    6465 main.go:141] libmachine: STDOUT: 
	I0925 04:27:17.801675    6465 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 04:27:17.801752    6465 fix.go:56] fixHost completed within 18.287875ms
	I0925 04:27:17.801771    6465 start.go:83] releasing machines lock for "no-preload-583000", held for 18.459ms
	W0925 04:27:17.801964    6465 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-583000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:27:17.817626    6465 out.go:177] 
	W0925 04:27:17.821692    6465 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 04:27:17.821724    6465 out.go:239] * 
	W0925 04:27:17.823758    6465 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0925 04:27:17.830579    6465 out.go:177] 

                                                
                                                
** /stderr **
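All of the cache hits in the log above resolve through one mapping from image reference to on-disk tarball: the tag separator ":" becomes "_" under $MINIKUBE_HOME/cache/images/<arch>/. A minimal Go sketch of that mapping as observed in these logs (the helper name is ours, not minikube's):

	package main

	import (
		"fmt"
		"path/filepath"
		"strings"
	)

	// cachePath reproduces the layout visible in the cache.go lines above:
	// "registry.k8s.io/kube-apiserver:v1.28.2" maps to
	// cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.2.
	func cachePath(minikubeHome, arch, image string) string {
		return filepath.Join(minikubeHome, "cache", "images", arch,
			strings.ReplaceAll(image, ":", "_"))
	}

	func main() {
		fmt.Println(cachePath("/Users/jenkins/minikube-integration/17297-1010/.minikube",
			"arm64", "registry.k8s.io/kube-apiserver:v1.28.2"))
	}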
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-583000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-583000 -n no-preload-583000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-583000 -n no-preload-583000: exit status 7 (47.613875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-583000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (7.17s)

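Every failed start in this group stalls at the same precondition: nothing is listening on /var/run/socket_vmnet, so socket_vmnet_client cannot hand QEMU a network file descriptor and the driver reports "Connection refused". A minimal Go sketch of that precondition check, assuming only the socket path taken from the logs above (this program is illustrative, not part of the test suite):

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// Socket path reported by the failing runs above.
		const sock = "/var/run/socket_vmnet"

		// A plain unix-domain dial reproduces the "Connection refused"
		// from the driver logs whenever the daemon is not running.
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If this dial fails on the CI host, restarting the socket_vmnet service before the run is the obvious first step.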
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-583000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-583000 -n no-preload-583000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-583000 -n no-preload-583000: exit status 7 (32.529ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-583000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-583000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-583000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-583000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (28.493917ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-583000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-583000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-583000 -n no-preload-583000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-583000 -n no-preload-583000: exit status 7 (31.185833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-583000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p no-preload-583000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p no-preload-583000 "sudo crictl images -o json": exit status 89 (40.586166ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p no-preload-583000"

                                                
                                                
-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p no-preload-583000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p no-preload-583000"
start_stop_delete_test.go:304: v1.28.2 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.2",
- 	"registry.k8s.io/kube-controller-manager:v1.28.2",
- 	"registry.k8s.io/kube-proxy:v1.28.2",
- 	"registry.k8s.io/kube-scheduler:v1.28.2",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-583000 -n no-preload-583000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-583000 -n no-preload-583000: exit status 7 (27.440208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-583000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

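The decode failure above ("invalid character '*' looking for beginning of value") is json.Unmarshal being fed minikube's human-readable banner instead of crictl JSON. A hedged sketch of guarding the decode so the raw text surfaces in the failure message (the helper's name and shape are illustrative):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// decodeImages refuses to unmarshal output that is not JSON (such as
	// minikube's "* The control plane node must be running ..." banner)
	// and returns the raw text in the error instead.
	func decodeImages(out []byte) (map[string]any, error) {
		if !json.Valid(out) {
			return nil, fmt.Errorf("not JSON: %q", out)
		}
		var images map[string]any
		if err := json.Unmarshal(out, &images); err != nil {
			return nil, err
		}
		return images, nil
	}

	func main() {
		_, err := decodeImages([]byte("* The control plane node must be running for this command"))
		fmt.Println(err)
	}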
TestStartStop/group/no-preload/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-583000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-583000 --alsologtostderr -v=1: exit status 89 (44.191ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p no-preload-583000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0925 04:27:18.076857    6490 out.go:296] Setting OutFile to fd 1 ...
	I0925 04:27:18.076998    6490 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:27:18.077001    6490 out.go:309] Setting ErrFile to fd 2...
	I0925 04:27:18.077003    6490 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:27:18.077141    6490 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17297-1010/.minikube/bin
	I0925 04:27:18.077387    6490 out.go:303] Setting JSON to false
	I0925 04:27:18.077397    6490 mustload.go:65] Loading cluster: no-preload-583000
	I0925 04:27:18.077586    6490 config.go:182] Loaded profile config "no-preload-583000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 04:27:18.081554    6490 out.go:177] * The control plane node must be running for this command
	I0925 04:27:18.090601    6490 out.go:177]   To start a cluster, run: "minikube start -p no-preload-583000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-583000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-583000 -n no-preload-583000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-583000 -n no-preload-583000: exit status 7 (26.283875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-583000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-583000 -n no-preload-583000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-583000 -n no-preload-583000: exit status 7 (26.949375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-583000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (11.04s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-941000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-941000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.2: exit status 80 (10.989778875s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-941000] minikube v1.31.2 on Darwin 13.6 (arm64)
	  - MINIKUBE_LOCATION=17297
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node default-k8s-diff-port-941000 in cluster default-k8s-diff-port-941000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-941000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0925 04:27:18.755836    6528 out.go:296] Setting OutFile to fd 1 ...
	I0925 04:27:18.755965    6528 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:27:18.755968    6528 out.go:309] Setting ErrFile to fd 2...
	I0925 04:27:18.755971    6528 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:27:18.756106    6528 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17297-1010/.minikube/bin
	I0925 04:27:18.757149    6528 out.go:303] Setting JSON to false
	I0925 04:27:18.772516    6528 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3413,"bootTime":1695637825,"procs":416,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.6","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 04:27:18.772597    6528 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0925 04:27:18.777235    6528 out.go:177] * [default-k8s-diff-port-941000] minikube v1.31.2 on Darwin 13.6 (arm64)
	I0925 04:27:18.784154    6528 out.go:177]   - MINIKUBE_LOCATION=17297
	I0925 04:27:18.788201    6528 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	I0925 04:27:18.784219    6528 notify.go:220] Checking for updates...
	I0925 04:27:18.792242    6528 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 04:27:18.795204    6528 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 04:27:18.798229    6528 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	I0925 04:27:18.801288    6528 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 04:27:18.805643    6528 config.go:182] Loaded profile config "embed-certs-064000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 04:27:18.805714    6528 config.go:182] Loaded profile config "multinode-352000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 04:27:18.805758    6528 driver.go:373] Setting default libvirt URI to qemu:///system
	I0925 04:27:18.810182    6528 out.go:177] * Using the qemu2 driver based on user configuration
	I0925 04:27:18.817249    6528 start.go:298] selected driver: qemu2
	I0925 04:27:18.817255    6528 start.go:902] validating driver "qemu2" against <nil>
	I0925 04:27:18.817261    6528 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 04:27:18.819306    6528 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0925 04:27:18.822201    6528 out.go:177] * Automatically selected the socket_vmnet network
	I0925 04:27:18.825355    6528 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0925 04:27:18.825390    6528 cni.go:84] Creating CNI manager for ""
	I0925 04:27:18.825400    6528 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 04:27:18.825404    6528 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0925 04:27:18.825410    6528 start_flags.go:321] config:
	{Name:default-k8s-diff-port-941000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:default-k8s-diff-port-941000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 04:27:18.829625    6528 iso.go:125] acquiring lock: {Name:mkf881a60cf9fd1672567914305ff6f7a4f13809 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 04:27:18.836256    6528 out.go:177] * Starting control plane node default-k8s-diff-port-941000 in cluster default-k8s-diff-port-941000
	I0925 04:27:18.840081    6528 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0925 04:27:18.840099    6528 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0925 04:27:18.840110    6528 cache.go:57] Caching tarball of preloaded images
	I0925 04:27:18.840170    6528 preload.go:174] Found /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0925 04:27:18.840176    6528 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0925 04:27:18.840251    6528 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/default-k8s-diff-port-941000/config.json ...
	I0925 04:27:18.840266    6528 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/default-k8s-diff-port-941000/config.json: {Name:mk06c65f200ec7dc310c5871cccfe5c0a8e772f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 04:27:18.840464    6528 start.go:365] acquiring machines lock for default-k8s-diff-port-941000: {Name:mk918d99818e2cf21e5912bc291ed18d4b442ba6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 04:27:20.108450    6528 start.go:369] acquired machines lock for "default-k8s-diff-port-941000" in 1.2679605s
	I0925 04:27:20.108616    6528 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-941000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:default-k8s-diff-port-941000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 04:27:20.108893    6528 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 04:27:20.118278    6528 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0925 04:27:20.163493    6528 start.go:159] libmachine.API.Create for "default-k8s-diff-port-941000" (driver="qemu2")
	I0925 04:27:20.163542    6528 client.go:168] LocalClient.Create starting
	I0925 04:27:20.163641    6528 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem
	I0925 04:27:20.163691    6528 main.go:141] libmachine: Decoding PEM data...
	I0925 04:27:20.163714    6528 main.go:141] libmachine: Parsing certificate...
	I0925 04:27:20.163776    6528 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/cert.pem
	I0925 04:27:20.163811    6528 main.go:141] libmachine: Decoding PEM data...
	I0925 04:27:20.163825    6528 main.go:141] libmachine: Parsing certificate...
	I0925 04:27:20.164446    6528 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17297-1010/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0925 04:27:20.302545    6528 main.go:141] libmachine: Creating SSH key...
	I0925 04:27:20.334320    6528 main.go:141] libmachine: Creating Disk image...
	I0925 04:27:20.334327    6528 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 04:27:20.334461    6528 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/default-k8s-diff-port-941000/disk.qcow2.raw /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/default-k8s-diff-port-941000/disk.qcow2
	I0925 04:27:20.342925    6528 main.go:141] libmachine: STDOUT: 
	I0925 04:27:20.342948    6528 main.go:141] libmachine: STDERR: 
	I0925 04:27:20.342997    6528 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/default-k8s-diff-port-941000/disk.qcow2 +20000M
	I0925 04:27:20.350729    6528 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 04:27:20.350753    6528 main.go:141] libmachine: STDERR: 
	I0925 04:27:20.350773    6528 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/default-k8s-diff-port-941000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/default-k8s-diff-port-941000/disk.qcow2
	I0925 04:27:20.350782    6528 main.go:141] libmachine: Starting QEMU VM...
	I0925 04:27:20.350811    6528 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/default-k8s-diff-port-941000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/default-k8s-diff-port-941000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/default-k8s-diff-port-941000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:23:e8:2e:5e:a6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/default-k8s-diff-port-941000/disk.qcow2
	I0925 04:27:20.352381    6528 main.go:141] libmachine: STDOUT: 
	I0925 04:27:20.352400    6528 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 04:27:20.352421    6528 client.go:171] LocalClient.Create took 188.871333ms
	I0925 04:27:22.354621    6528 start.go:128] duration metric: createHost completed in 2.245693041s
	I0925 04:27:22.354698    6528 start.go:83] releasing machines lock for "default-k8s-diff-port-941000", held for 2.246208167s
	W0925 04:27:22.354806    6528 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:27:22.368364    6528 out.go:177] * Deleting "default-k8s-diff-port-941000" in qemu2 ...
	W0925 04:27:22.391947    6528 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:27:22.391983    6528 start.go:703] Will try again in 5 seconds ...
	I0925 04:27:27.394178    6528 start.go:365] acquiring machines lock for default-k8s-diff-port-941000: {Name:mk918d99818e2cf21e5912bc291ed18d4b442ba6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 04:27:27.404890    6528 start.go:369] acquired machines lock for "default-k8s-diff-port-941000" in 10.640542ms
	I0925 04:27:27.404941    6528 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-941000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:default-k8s-diff-port-941000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 04:27:27.405096    6528 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 04:27:27.413184    6528 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0925 04:27:27.456220    6528 start.go:159] libmachine.API.Create for "default-k8s-diff-port-941000" (driver="qemu2")
	I0925 04:27:27.456270    6528 client.go:168] LocalClient.Create starting
	I0925 04:27:27.456394    6528 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem
	I0925 04:27:27.456455    6528 main.go:141] libmachine: Decoding PEM data...
	I0925 04:27:27.456472    6528 main.go:141] libmachine: Parsing certificate...
	I0925 04:27:27.456537    6528 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/cert.pem
	I0925 04:27:27.456572    6528 main.go:141] libmachine: Decoding PEM data...
	I0925 04:27:27.456584    6528 main.go:141] libmachine: Parsing certificate...
	I0925 04:27:27.457075    6528 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17297-1010/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0925 04:27:27.592002    6528 main.go:141] libmachine: Creating SSH key...
	I0925 04:27:27.659811    6528 main.go:141] libmachine: Creating Disk image...
	I0925 04:27:27.659822    6528 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 04:27:27.660002    6528 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/default-k8s-diff-port-941000/disk.qcow2.raw /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/default-k8s-diff-port-941000/disk.qcow2
	I0925 04:27:27.669033    6528 main.go:141] libmachine: STDOUT: 
	I0925 04:27:27.669054    6528 main.go:141] libmachine: STDERR: 
	I0925 04:27:27.669123    6528 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/default-k8s-diff-port-941000/disk.qcow2 +20000M
	I0925 04:27:27.677539    6528 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 04:27:27.677558    6528 main.go:141] libmachine: STDERR: 
	I0925 04:27:27.677573    6528 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/default-k8s-diff-port-941000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/default-k8s-diff-port-941000/disk.qcow2
	I0925 04:27:27.677584    6528 main.go:141] libmachine: Starting QEMU VM...
	I0925 04:27:27.677633    6528 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/default-k8s-diff-port-941000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/default-k8s-diff-port-941000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/default-k8s-diff-port-941000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:39:dc:53:9b:c5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/default-k8s-diff-port-941000/disk.qcow2
	I0925 04:27:27.679322    6528 main.go:141] libmachine: STDOUT: 
	I0925 04:27:27.679336    6528 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 04:27:27.679349    6528 client.go:171] LocalClient.Create took 223.073375ms
	I0925 04:27:29.681561    6528 start.go:128] duration metric: createHost completed in 2.276435417s
	I0925 04:27:29.681641    6528 start.go:83] releasing machines lock for "default-k8s-diff-port-941000", held for 2.276725834s
	W0925 04:27:29.681984    6528 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-941000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:27:29.695525    6528 out.go:177] 
	W0925 04:27:29.699754    6528 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 04:27:29.699785    6528 out.go:239] * 
	W0925 04:27:29.702343    6528 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0925 04:27:29.711653    6528 out.go:177] 

                                                
                                                
** /stderr **
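Note that createHost gets through the disk stage cleanly in both attempts above: qemu-img convert turns the raw seed image into a qcow2 and qemu-img resize grows it, and only the subsequent socket_vmnet connection fails. A sketch of those two invocations as logged by libmachine (paths shortened to relative names; assumes qemu-img is on PATH):

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		// The two disk-image commands logged by libmachine above.
		steps := [][]string{
			{"qemu-img", "convert", "-f", "raw", "-O", "qcow2", "disk.qcow2.raw", "disk.qcow2"},
			{"qemu-img", "resize", "disk.qcow2", "+20000M"},
		}
		for _, s := range steps {
			out, err := exec.Command(s[0], s[1:]...).CombinedOutput()
			if err != nil {
				log.Fatalf("%v failed: %v\n%s", s, err, out)
			}
		}
	}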
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-941000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-941000 -n default-k8s-diff-port-941000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-941000 -n default-k8s-diff-port-941000: exit status 7 (48.893916ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-941000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (11.04s)

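The failing starts in this file all share one control flow: StartHost fails, the driver logs "Will try again in 5 seconds", retries once, and then exits 80 with GUEST_PROVISION. A minimal Go sketch of that retry shape, with a stand-in error modeled on the logs (none of this is minikube source):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startHost stands in for the driver start attempted in the logs;
	// here it always fails the way those runs did.
	func startHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := startHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second)
			if err := startHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			}
		}
	}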
TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-064000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-064000 create -f testdata/busybox.yaml: exit status 1 (30.061583ms)

                                                
                                                
** stderr ** 
	error: no openapi getter

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-064000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-064000 -n embed-certs-064000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-064000 -n embed-certs-064000: exit status 7 (31.254375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-064000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-064000 -n embed-certs-064000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-064000 -n embed-certs-064000: exit status 7 (30.921333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-064000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-064000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-064000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-064000 describe deploy/metrics-server -n kube-system: exit status 1 (25.466334ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-064000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-064000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-064000 -n embed-certs-064000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-064000 -n embed-certs-064000: exit status 7 (26.909541ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-064000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/embed-certs/serial/SecondStart (6.94s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-064000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-064000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.2: exit status 80 (6.890855042s)

                                                
                                                
-- stdout --
	* [embed-certs-064000] minikube v1.31.2 on Darwin 13.6 (arm64)
	  - MINIKUBE_LOCATION=17297
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node embed-certs-064000 in cluster embed-certs-064000
	* Restarting existing qemu2 VM for "embed-certs-064000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-064000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0925 04:27:20.576501    6558 out.go:296] Setting OutFile to fd 1 ...
	I0925 04:27:20.576637    6558 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:27:20.576640    6558 out.go:309] Setting ErrFile to fd 2...
	I0925 04:27:20.576643    6558 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:27:20.576769    6558 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17297-1010/.minikube/bin
	I0925 04:27:20.577749    6558 out.go:303] Setting JSON to false
	I0925 04:27:20.592711    6558 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3415,"bootTime":1695637825,"procs":418,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.6","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 04:27:20.592796    6558 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0925 04:27:20.596243    6558 out.go:177] * [embed-certs-064000] minikube v1.31.2 on Darwin 13.6 (arm64)
	I0925 04:27:20.607217    6558 out.go:177]   - MINIKUBE_LOCATION=17297
	I0925 04:27:20.603363    6558 notify.go:220] Checking for updates...
	I0925 04:27:20.614301    6558 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	I0925 04:27:20.618201    6558 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 04:27:20.621255    6558 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 04:27:20.624303    6558 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	I0925 04:27:20.627171    6558 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 04:27:20.630522    6558 config.go:182] Loaded profile config "embed-certs-064000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 04:27:20.630780    6558 driver.go:373] Setting default libvirt URI to qemu:///system
	I0925 04:27:20.635201    6558 out.go:177] * Using the qemu2 driver based on existing profile
	I0925 04:27:20.642277    6558 start.go:298] selected driver: qemu2
	I0925 04:27:20.642285    6558 start.go:902] validating driver "qemu2" against &{Name:embed-certs-064000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.2 ClusterName:embed-certs-064000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested
:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 04:27:20.642355    6558 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 04:27:20.644555    6558 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0925 04:27:20.644579    6558 cni.go:84] Creating CNI manager for ""
	I0925 04:27:20.644587    6558 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 04:27:20.644594    6558 start_flags.go:321] config:
	{Name:embed-certs-064000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:embed-certs-064000 Namespace:default
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/m
inikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 04:27:20.648888    6558 iso.go:125] acquiring lock: {Name:mkf881a60cf9fd1672567914305ff6f7a4f13809 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 04:27:20.656251    6558 out.go:177] * Starting control plane node embed-certs-064000 in cluster embed-certs-064000
	I0925 04:27:20.660272    6558 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0925 04:27:20.660308    6558 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0925 04:27:20.660327    6558 cache.go:57] Caching tarball of preloaded images
	I0925 04:27:20.660388    6558 preload.go:174] Found /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0925 04:27:20.660393    6558 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0925 04:27:20.660439    6558 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/embed-certs-064000/config.json ...
	I0925 04:27:20.660858    6558 start.go:365] acquiring machines lock for embed-certs-064000: {Name:mk918d99818e2cf21e5912bc291ed18d4b442ba6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 04:27:22.354818    6558 start.go:369] acquired machines lock for "embed-certs-064000" in 1.693933333s
	I0925 04:27:22.355029    6558 start.go:96] Skipping create...Using existing machine configuration
	I0925 04:27:22.355078    6558 fix.go:54] fixHost starting: 
	I0925 04:27:22.355775    6558 fix.go:102] recreateIfNeeded on embed-certs-064000: state=Stopped err=<nil>
	W0925 04:27:22.355838    6558 fix.go:128] unexpected machine state, will restart: <nil>
	I0925 04:27:22.365354    6558 out.go:177] * Restarting existing qemu2 VM for "embed-certs-064000" ...
	I0925 04:27:22.372516    6558 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/embed-certs-064000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/embed-certs-064000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/embed-certs-064000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:56:eb:65:33:a3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/embed-certs-064000/disk.qcow2
	I0925 04:27:22.382218    6558 main.go:141] libmachine: STDOUT: 
	I0925 04:27:22.382286    6558 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 04:27:22.382416    6558 fix.go:56] fixHost completed within 27.357125ms
	I0925 04:27:22.382438    6558 start.go:83] releasing machines lock for "embed-certs-064000", held for 27.572292ms
	W0925 04:27:22.382463    6558 start.go:688] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 04:27:22.382682    6558 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:27:22.382701    6558 start.go:703] Will try again in 5 seconds ...
	I0925 04:27:27.385070    6558 start.go:365] acquiring machines lock for embed-certs-064000: {Name:mk918d99818e2cf21e5912bc291ed18d4b442ba6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 04:27:27.385529    6558 start.go:369] acquired machines lock for "embed-certs-064000" in 352.208µs
	I0925 04:27:27.385694    6558 start.go:96] Skipping create...Using existing machine configuration
	I0925 04:27:27.385716    6558 fix.go:54] fixHost starting: 
	I0925 04:27:27.386457    6558 fix.go:102] recreateIfNeeded on embed-certs-064000: state=Stopped err=<nil>
	W0925 04:27:27.386482    6558 fix.go:128] unexpected machine state, will restart: <nil>
	I0925 04:27:27.391293    6558 out.go:177] * Restarting existing qemu2 VM for "embed-certs-064000" ...
	I0925 04:27:27.395323    6558 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/embed-certs-064000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/embed-certs-064000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/embed-certs-064000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:56:eb:65:33:a3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/embed-certs-064000/disk.qcow2
	I0925 04:27:27.404548    6558 main.go:141] libmachine: STDOUT: 
	I0925 04:27:27.404602    6558 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 04:27:27.404817    6558 fix.go:56] fixHost completed within 19.03725ms
	I0925 04:27:27.404831    6558 start.go:83] releasing machines lock for "embed-certs-064000", held for 19.281667ms
	W0925 04:27:27.404989    6558 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-064000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-064000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:27:27.416076    6558 out.go:177] 
	W0925 04:27:27.420225    6558 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 04:27:27.420254    6558 out.go:239] * 
	* 
	W0925 04:27:27.422342    6558 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0925 04:27:27.432165    6558 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-064000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-064000 -n embed-certs-064000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-064000 -n embed-certs-064000: exit status 7 (47.057ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-064000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (6.94s)
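
Both restart attempts above fail at the same precondition: libmachine launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, which must first connect to the unix socket at /var/run/socket_vmnet and hand the connected descriptor to qemu as fd 3 (the -netdev socket,id=net0,fd=3 argument in the command line above). "Connection refused" therefore means nothing was listening on that socket on the CI host, so the 5-second retry cannot succeed either. A hedged sketch of probing that precondition; the socket path comes from the log, everything else is standard library:

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// Dial the same unix socket socket_vmnet_client uses. If the
		// path exists but no daemon is listening, Dial fails with
		// "connection refused", matching the driver error above.
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is listening")
	}

The identical error recurs in every qemu2 start in this report, which suggests a host-level socket_vmnet daemon outage rather than a per-profile problem.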

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-064000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-064000 -n embed-certs-064000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-064000 -n embed-certs-064000: exit status 7 (31.985708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-064000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-064000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-064000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-064000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (29.347167ms)

** stderr ** 
	error: context "embed-certs-064000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-064000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-064000 -n embed-certs-064000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-064000 -n embed-certs-064000: exit status 7 (31.658625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-064000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)
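
The recurring `context "embed-certs-064000" does not exist` message is a client-side kubeconfig failure: the profile was never provisioned, so no context entry was written, and kubectl aborts before contacting any server. The equivalent lookup through client-go, as a hedged sketch (the clientcmd names are client-go's public API, and the k8s.io/client-go module is an assumed dependency of this snippet):

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Resolve a kubeconfig context the way kubectl does; with no
		// entry for the profile this fails client-side, before any
		// API request is made.
		loader := clientcmd.NewDefaultClientConfigLoadingRules()
		overrides := &clientcmd.ConfigOverrides{CurrentContext: "embed-certs-064000"}
		_, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(loader, overrides).ClientConfig()
		fmt.Println(err) // context "embed-certs-064000" does not exist
	}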

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p embed-certs-064000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p embed-certs-064000 "sudo crictl images -o json": exit status 89 (40.041416ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p embed-certs-064000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p embed-certs-064000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p embed-certs-064000"
start_stop_delete_test.go:304: v1.28.2 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.2",
- 	"registry.k8s.io/kube-controller-manager:v1.28.2",
- 	"registry.k8s.io/kube-proxy:v1.28.2",
- 	"registry.k8s.io/kube-scheduler:v1.28.2",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-064000 -n embed-certs-064000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-064000 -n embed-certs-064000: exit status 7 (27.536875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-064000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)
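
The image check runs sudo crictl images -o json inside the node and unmarshals stdout; with the host down, stdout carries minikube's advisory text instead of JSON, which is exactly the invalid character '*' decode error above, and the want/got diff then reports every expected v1.28.2 image as missing. A hedged sketch of that decode step; the {"images":[{"repoTags":[...]}]} shape is an assumption about crictl's output format, not taken from this log:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// crictlImages assumes crictl's JSON shape: {"images":[{"repoTags":[...]}]}.
	type crictlImages struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func parseImages(out []byte) ([]string, error) {
		var data crictlImages
		if err := json.Unmarshal(out, &data); err != nil {
			// Advisory text such as "* The control plane node must
			// be running..." lands here as: invalid character '*'
			// looking for beginning of value.
			return nil, err
		}
		var tags []string
		for _, img := range data.Images {
			tags = append(tags, img.RepoTags...)
		}
		return tags, nil
	}

	func main() {
		_, err := parseImages([]byte("* The control plane node must be running for this command"))
		fmt.Println(err) // invalid character '*' looking for beginning of value
	}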

TestStartStop/group/embed-certs/serial/Pause (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-064000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-064000 --alsologtostderr -v=1: exit status 89 (39.70875ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p embed-certs-064000"

-- /stdout --
** stderr ** 
	I0925 04:27:27.677927    6578 out.go:296] Setting OutFile to fd 1 ...
	I0925 04:27:27.678073    6578 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:27:27.678077    6578 out.go:309] Setting ErrFile to fd 2...
	I0925 04:27:27.678080    6578 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:27:27.678207    6578 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17297-1010/.minikube/bin
	I0925 04:27:27.678414    6578 out.go:303] Setting JSON to false
	I0925 04:27:27.678424    6578 mustload.go:65] Loading cluster: embed-certs-064000
	I0925 04:27:27.678633    6578 config.go:182] Loaded profile config "embed-certs-064000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 04:27:27.683133    6578 out.go:177] * The control plane node must be running for this command
	I0925 04:27:27.687190    6578 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-064000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-064000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-064000 -n embed-certs-064000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-064000 -n embed-certs-064000: exit status 7 (26.370666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-064000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-064000 -n embed-certs-064000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-064000 -n embed-certs-064000: exit status 7 (27.003916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-064000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.09s)

TestStartStop/group/newest-cni/serial/FirstStart (11.3s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-140000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-140000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.2: exit status 80 (11.23457675s)

-- stdout --
	* [newest-cni-140000] minikube v1.31.2 on Darwin 13.6 (arm64)
	  - MINIKUBE_LOCATION=17297
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node newest-cni-140000 in cluster newest-cni-140000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-140000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0925 04:27:28.129008    6607 out.go:296] Setting OutFile to fd 1 ...
	I0925 04:27:28.129128    6607 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:27:28.129131    6607 out.go:309] Setting ErrFile to fd 2...
	I0925 04:27:28.129134    6607 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:27:28.129281    6607 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17297-1010/.minikube/bin
	I0925 04:27:28.130363    6607 out.go:303] Setting JSON to false
	I0925 04:27:28.145629    6607 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3423,"bootTime":1695637825,"procs":415,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.6","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 04:27:28.145733    6607 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0925 04:27:28.150305    6607 out.go:177] * [newest-cni-140000] minikube v1.31.2 on Darwin 13.6 (arm64)
	I0925 04:27:28.157354    6607 out.go:177]   - MINIKUBE_LOCATION=17297
	I0925 04:27:28.161239    6607 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	I0925 04:27:28.157416    6607 notify.go:220] Checking for updates...
	I0925 04:27:28.168304    6607 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 04:27:28.171266    6607 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 04:27:28.174312    6607 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	I0925 04:27:28.177309    6607 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 04:27:28.180636    6607 config.go:182] Loaded profile config "default-k8s-diff-port-941000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 04:27:28.180700    6607 config.go:182] Loaded profile config "multinode-352000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 04:27:28.180739    6607 driver.go:373] Setting default libvirt URI to qemu:///system
	I0925 04:27:28.184228    6607 out.go:177] * Using the qemu2 driver based on user configuration
	I0925 04:27:28.191295    6607 start.go:298] selected driver: qemu2
	I0925 04:27:28.191307    6607 start.go:902] validating driver "qemu2" against <nil>
	I0925 04:27:28.191313    6607 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 04:27:28.193408    6607 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	W0925 04:27:28.193429    6607 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0925 04:27:28.201313    6607 out.go:177] * Automatically selected the socket_vmnet network
	I0925 04:27:28.204303    6607 start_flags.go:941] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0925 04:27:28.204321    6607 cni.go:84] Creating CNI manager for ""
	I0925 04:27:28.204328    6607 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 04:27:28.204333    6607 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0925 04:27:28.204338    6607 start_flags.go:321] config:
	{Name:newest-cni-140000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:newest-cni-140000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:d
ocker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/s
ocket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 04:27:28.208574    6607 iso.go:125] acquiring lock: {Name:mkf881a60cf9fd1672567914305ff6f7a4f13809 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 04:27:28.215160    6607 out.go:177] * Starting control plane node newest-cni-140000 in cluster newest-cni-140000
	I0925 04:27:28.219324    6607 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0925 04:27:28.219343    6607 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0925 04:27:28.219357    6607 cache.go:57] Caching tarball of preloaded images
	I0925 04:27:28.219425    6607 preload.go:174] Found /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0925 04:27:28.219431    6607 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0925 04:27:28.219502    6607 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/newest-cni-140000/config.json ...
	I0925 04:27:28.219520    6607 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/newest-cni-140000/config.json: {Name:mk02c9dab2f8bed76ae0c5ea5ce006e4ff45210d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 04:27:28.219736    6607 start.go:365] acquiring machines lock for newest-cni-140000: {Name:mk918d99818e2cf21e5912bc291ed18d4b442ba6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 04:27:29.681784    6607 start.go:369] acquired machines lock for "newest-cni-140000" in 1.461999542s
	I0925 04:27:29.681940    6607 start.go:93] Provisioning new machine with config: &{Name:newest-cni-140000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.28.2 ClusterName:newest-cni-140000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountS
tring:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 04:27:29.682178    6607 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 04:27:29.691699    6607 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0925 04:27:29.739352    6607 start.go:159] libmachine.API.Create for "newest-cni-140000" (driver="qemu2")
	I0925 04:27:29.739396    6607 client.go:168] LocalClient.Create starting
	I0925 04:27:29.739527    6607 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem
	I0925 04:27:29.739581    6607 main.go:141] libmachine: Decoding PEM data...
	I0925 04:27:29.739604    6607 main.go:141] libmachine: Parsing certificate...
	I0925 04:27:29.739676    6607 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/cert.pem
	I0925 04:27:29.739718    6607 main.go:141] libmachine: Decoding PEM data...
	I0925 04:27:29.739760    6607 main.go:141] libmachine: Parsing certificate...
	I0925 04:27:29.740394    6607 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17297-1010/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0925 04:27:29.874515    6607 main.go:141] libmachine: Creating SSH key...
	I0925 04:27:29.956442    6607 main.go:141] libmachine: Creating Disk image...
	I0925 04:27:29.956455    6607 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 04:27:29.956621    6607 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/newest-cni-140000/disk.qcow2.raw /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/newest-cni-140000/disk.qcow2
	I0925 04:27:29.965655    6607 main.go:141] libmachine: STDOUT: 
	I0925 04:27:29.965671    6607 main.go:141] libmachine: STDERR: 
	I0925 04:27:29.965729    6607 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/newest-cni-140000/disk.qcow2 +20000M
	I0925 04:27:29.973788    6607 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 04:27:29.973805    6607 main.go:141] libmachine: STDERR: 
	I0925 04:27:29.973823    6607 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/newest-cni-140000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/newest-cni-140000/disk.qcow2
	I0925 04:27:29.973832    6607 main.go:141] libmachine: Starting QEMU VM...
	I0925 04:27:29.973875    6607 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/newest-cni-140000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/newest-cni-140000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/newest-cni-140000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:85:6d:13:8f:db -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/newest-cni-140000/disk.qcow2
	I0925 04:27:29.975594    6607 main.go:141] libmachine: STDOUT: 
	I0925 04:27:29.975608    6607 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 04:27:29.975628    6607 client.go:171] LocalClient.Create took 236.225458ms
	I0925 04:27:31.978022    6607 start.go:128] duration metric: createHost completed in 2.295734542s
	I0925 04:27:31.978118    6607 start.go:83] releasing machines lock for "newest-cni-140000", held for 2.296298042s
	W0925 04:27:31.978167    6607 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:27:31.996625    6607 out.go:177] * Deleting "newest-cni-140000" in qemu2 ...
	W0925 04:27:32.019164    6607 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:27:32.019190    6607 start.go:703] Will try again in 5 seconds ...
	I0925 04:27:37.021343    6607 start.go:365] acquiring machines lock for newest-cni-140000: {Name:mk918d99818e2cf21e5912bc291ed18d4b442ba6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 04:27:37.031770    6607 start.go:369] acquired machines lock for "newest-cni-140000" in 10.358041ms
	I0925 04:27:37.031824    6607 start.go:93] Provisioning new machine with config: &{Name:newest-cni-140000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.28.2 ClusterName:newest-cni-140000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountS
tring:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 04:27:37.032021    6607 start.go:125] createHost starting for "" (driver="qemu2")
	I0925 04:27:37.039255    6607 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0925 04:27:37.081585    6607 start.go:159] libmachine.API.Create for "newest-cni-140000" (driver="qemu2")
	I0925 04:27:37.081625    6607 client.go:168] LocalClient.Create starting
	I0925 04:27:37.081773    6607 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/ca.pem
	I0925 04:27:37.081833    6607 main.go:141] libmachine: Decoding PEM data...
	I0925 04:27:37.081849    6607 main.go:141] libmachine: Parsing certificate...
	I0925 04:27:37.081917    6607 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1010/.minikube/certs/cert.pem
	I0925 04:27:37.081950    6607 main.go:141] libmachine: Decoding PEM data...
	I0925 04:27:37.081973    6607 main.go:141] libmachine: Parsing certificate...
	I0925 04:27:37.082465    6607 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17297-1010/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0925 04:27:37.215342    6607 main.go:141] libmachine: Creating SSH key...
	I0925 04:27:37.279188    6607 main.go:141] libmachine: Creating Disk image...
	I0925 04:27:37.279198    6607 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0925 04:27:37.279367    6607 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/newest-cni-140000/disk.qcow2.raw /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/newest-cni-140000/disk.qcow2
	I0925 04:27:37.288462    6607 main.go:141] libmachine: STDOUT: 
	I0925 04:27:37.288488    6607 main.go:141] libmachine: STDERR: 
	I0925 04:27:37.288556    6607 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/newest-cni-140000/disk.qcow2 +20000M
	I0925 04:27:37.297031    6607 main.go:141] libmachine: STDOUT: Image resized.
	
	I0925 04:27:37.297050    6607 main.go:141] libmachine: STDERR: 
	I0925 04:27:37.297063    6607 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/newest-cni-140000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/newest-cni-140000/disk.qcow2
	I0925 04:27:37.297072    6607 main.go:141] libmachine: Starting QEMU VM...
	I0925 04:27:37.297116    6607 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/newest-cni-140000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/newest-cni-140000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/newest-cni-140000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:8a:f1:41:5d:b1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/newest-cni-140000/disk.qcow2
	I0925 04:27:37.298912    6607 main.go:141] libmachine: STDOUT: 
	I0925 04:27:37.298932    6607 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 04:27:37.298948    6607 client.go:171] LocalClient.Create took 217.314375ms
	I0925 04:27:39.301151    6607 start.go:128] duration metric: createHost completed in 2.269094875s
	I0925 04:27:39.301235    6607 start.go:83] releasing machines lock for "newest-cni-140000", held for 2.269440708s
	W0925 04:27:39.301595    6607 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-140000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-140000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:27:39.307751    6607 out.go:177] 
	W0925 04:27:39.315741    6607 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 04:27:39.315769    6607 out.go:239] * 
	* 
	W0925 04:27:39.318524    6607 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0925 04:27:39.326715    6607 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-140000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-140000 -n newest-cni-140000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-140000 -n newest-cni-140000: exit status 7 (66.415667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-140000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (11.30s)
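
Unlike the embed-certs restart, this is a fresh profile, so the driver first builds the machine disk: qemu-img convert turns the raw seed image into qcow2, qemu-img resize grows it by the requested 20000 MB, and only the final socket_vmnet launch fails. The two qemu-img invocations from the log, replayed as a sketch (disk paths shortened for illustration):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same qemu-img steps as the log above; the disk paths stand in
		// for the per-machine files under .minikube/machines.
		steps := [][]string{
			{"qemu-img", "convert", "-f", "raw", "-O", "qcow2", "disk.qcow2.raw", "disk.qcow2"},
			{"qemu-img", "resize", "disk.qcow2", "+20000M"},
		}
		for _, s := range steps {
			out, err := exec.Command(s[0], s[1:]...).CombinedOutput()
			if err != nil {
				fmt.Printf("%v failed: %v\n%s", s, err, out)
				return
			}
		}
		fmt.Println("disk image ready")
	}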

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-941000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-941000 create -f testdata/busybox.yaml: exit status 1 (30.956417ms)

** stderr ** 
	error: no openapi getter

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-941000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-941000 -n default-k8s-diff-port-941000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-941000 -n default-k8s-diff-port-941000: exit status 7 (31.738958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-941000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-941000 -n default-k8s-diff-port-941000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-941000 -n default-k8s-diff-port-941000: exit status 7 (31.476125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-941000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)
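
The "no openapi getter" message comes from kubectl's client-side schema validation, which needs to fetch the OpenAPI document from the API server; with the VM stopped there is no server to fetch from, so the manifest itself is never evaluated. A quick way to separate connectivity problems from manifest problems (both flags are standard kubectl):

	# fails fast if the context has no reachable API server
	kubectl --context default-k8s-diff-port-941000 cluster-info

	# skips the OpenAPI fetch; a genuine syntax error in busybox.yaml would still surface
	kubectl --context default-k8s-diff-port-941000 create -f testdata/busybox.yaml --validate=false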

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-941000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-941000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-941000 describe deploy/metrics-server -n kube-system: exit status 1 (26.78075ms)

** stderr ** 
	error: context "default-k8s-diff-port-941000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-941000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-941000 -n default-k8s-diff-port-941000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-941000 -n default-k8s-diff-port-941000: exit status 7 (27.453ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-941000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)
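
By this point kubectl cannot even find the context: minikube normally writes a kubeconfig entry only after the host provisions, so a profile whose VM never started plausibly leaves no context behind. Checking is one command:

	# prints nothing when the failed profile never registered a context
	kubectl config get-contexts -o name | grep default-k8s-diff-port-941000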

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.99s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-941000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-941000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.2: exit status 80 (6.946588792s)

-- stdout --
	* [default-k8s-diff-port-941000] minikube v1.31.2 on Darwin 13.6 (arm64)
	  - MINIKUBE_LOCATION=17297
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node default-k8s-diff-port-941000 in cluster default-k8s-diff-port-941000
	* Restarting existing qemu2 VM for "default-k8s-diff-port-941000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-941000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0925 04:27:30.148580    6635 out.go:296] Setting OutFile to fd 1 ...
	I0925 04:27:30.148692    6635 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:27:30.148694    6635 out.go:309] Setting ErrFile to fd 2...
	I0925 04:27:30.148697    6635 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:27:30.148844    6635 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17297-1010/.minikube/bin
	I0925 04:27:30.149809    6635 out.go:303] Setting JSON to false
	I0925 04:27:30.165060    6635 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3425,"bootTime":1695637825,"procs":415,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.6","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 04:27:30.165148    6635 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0925 04:27:30.169621    6635 out.go:177] * [default-k8s-diff-port-941000] minikube v1.31.2 on Darwin 13.6 (arm64)
	I0925 04:27:30.176621    6635 out.go:177]   - MINIKUBE_LOCATION=17297
	I0925 04:27:30.180538    6635 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	I0925 04:27:30.176683    6635 notify.go:220] Checking for updates...
	I0925 04:27:30.186595    6635 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 04:27:30.189527    6635 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 04:27:30.192572    6635 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	I0925 04:27:30.195588    6635 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 04:27:30.198841    6635 config.go:182] Loaded profile config "default-k8s-diff-port-941000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 04:27:30.199103    6635 driver.go:373] Setting default libvirt URI to qemu:///system
	I0925 04:27:30.203580    6635 out.go:177] * Using the qemu2 driver based on existing profile
	I0925 04:27:30.210581    6635 start.go:298] selected driver: qemu2
	I0925 04:27:30.210589    6635 start.go:902] validating driver "qemu2" against &{Name:default-k8s-diff-port-941000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:default-k8s-diff-port-941000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subne
t: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 04:27:30.210661    6635 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 04:27:30.212766    6635 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0925 04:27:30.212795    6635 cni.go:84] Creating CNI manager for ""
	I0925 04:27:30.212802    6635 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 04:27:30.212809    6635 start_flags.go:321] config:
	{Name:default-k8s-diff-port-941000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:default-k8s-diff-port-9410
00 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 04:27:30.217032    6635 iso.go:125] acquiring lock: {Name:mkf881a60cf9fd1672567914305ff6f7a4f13809 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 04:27:30.224578    6635 out.go:177] * Starting control plane node default-k8s-diff-port-941000 in cluster default-k8s-diff-port-941000
	I0925 04:27:30.228604    6635 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0925 04:27:30.228624    6635 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0925 04:27:30.228640    6635 cache.go:57] Caching tarball of preloaded images
	I0925 04:27:30.228716    6635 preload.go:174] Found /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0925 04:27:30.228731    6635 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0925 04:27:30.228806    6635 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/default-k8s-diff-port-941000/config.json ...
	I0925 04:27:30.229191    6635 start.go:365] acquiring machines lock for default-k8s-diff-port-941000: {Name:mk918d99818e2cf21e5912bc291ed18d4b442ba6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 04:27:31.978260    6635 start.go:369] acquired machines lock for "default-k8s-diff-port-941000" in 1.749045291s
	I0925 04:27:31.978415    6635 start.go:96] Skipping create...Using existing machine configuration
	I0925 04:27:31.978451    6635 fix.go:54] fixHost starting: 
	I0925 04:27:31.979129    6635 fix.go:102] recreateIfNeeded on default-k8s-diff-port-941000: state=Stopped err=<nil>
	W0925 04:27:31.979176    6635 fix.go:128] unexpected machine state, will restart: <nil>
	I0925 04:27:31.988595    6635 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-941000" ...
	I0925 04:27:32.000884    6635 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/default-k8s-diff-port-941000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/default-k8s-diff-port-941000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/default-k8s-diff-port-941000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:39:dc:53:9b:c5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/default-k8s-diff-port-941000/disk.qcow2
	I0925 04:27:32.010227    6635 main.go:141] libmachine: STDOUT: 
	I0925 04:27:32.010286    6635 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 04:27:32.010404    6635 fix.go:56] fixHost completed within 31.958125ms
	I0925 04:27:32.010422    6635 start.go:83] releasing machines lock for "default-k8s-diff-port-941000", held for 32.123583ms
	W0925 04:27:32.010448    6635 start.go:688] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 04:27:32.010760    6635 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:27:32.010781    6635 start.go:703] Will try again in 5 seconds ...
	I0925 04:27:37.013032    6635 start.go:365] acquiring machines lock for default-k8s-diff-port-941000: {Name:mk918d99818e2cf21e5912bc291ed18d4b442ba6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 04:27:37.013595    6635 start.go:369] acquired machines lock for "default-k8s-diff-port-941000" in 448.5µs
	I0925 04:27:37.013740    6635 start.go:96] Skipping create...Using existing machine configuration
	I0925 04:27:37.013763    6635 fix.go:54] fixHost starting: 
	I0925 04:27:37.014530    6635 fix.go:102] recreateIfNeeded on default-k8s-diff-port-941000: state=Stopped err=<nil>
	W0925 04:27:37.014558    6635 fix.go:128] unexpected machine state, will restart: <nil>
	I0925 04:27:37.019408    6635 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-941000" ...
	I0925 04:27:37.022532    6635 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/default-k8s-diff-port-941000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/default-k8s-diff-port-941000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/default-k8s-diff-port-941000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:39:dc:53:9b:c5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/default-k8s-diff-port-941000/disk.qcow2
	I0925 04:27:37.031560    6635 main.go:141] libmachine: STDOUT: 
	I0925 04:27:37.031615    6635 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 04:27:37.031693    6635 fix.go:56] fixHost completed within 17.932417ms
	I0925 04:27:37.031712    6635 start.go:83] releasing machines lock for "default-k8s-diff-port-941000", held for 18.094875ms
	W0925 04:27:37.031887    6635 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-941000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-941000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:27:37.045255    6635 out.go:177] 
	W0925 04:27:37.049258    6635 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 04:27:37.049278    6635 out.go:239] * 
	* 
	W0925 04:27:37.051275    6635 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0925 04:27:37.060010    6635 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-941000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-941000 -n default-k8s-diff-port-941000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-941000 -n default-k8s-diff-port-941000: exit status 7 (46.762125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-941000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.99s)
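
The stderr above captures the exact launch: libmachine wraps qemu-system-aarch64 in socket_vmnet_client, which must reach /var/run/socket_vmnet before qemu even boots, so the restart dies at connect() time and minikube's single 5-second retry hits the same wall. Assuming socket_vmnet_client's usual contract (connect to the socket, then exec the given command with the connection on fd 3), the failure reproduces without involving qemu at all:

	# exits non-zero with "Connection refused" while the daemon is down
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true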

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-941000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-941000 -n default-k8s-diff-port-941000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-941000 -n default-k8s-diff-port-941000: exit status 7 (32.528416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-941000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-941000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-941000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-941000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (29.380791ms)

** stderr ** 
	error: context "default-k8s-diff-port-941000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-941000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-941000 -n default-k8s-diff-port-941000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-941000 -n default-k8s-diff-port-941000: exit status 7 (31.316292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-941000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-941000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-941000 "sudo crictl images -o json": exit status 89 (41.697292ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-941000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-941000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p default-k8s-diff-port-941000"
start_stop_delete_test.go:304: v1.28.2 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.2",
- 	"registry.k8s.io/kube-controller-manager:v1.28.2",
- 	"registry.k8s.io/kube-proxy:v1.28.2",
- 	"registry.k8s.io/kube-scheduler:v1.28.2",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-941000 -n default-k8s-diff-port-941000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-941000 -n default-k8s-diff-port-941000: exit status 7 (27.886ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-941000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
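
The seemingly cryptic "invalid character '*'" is just encoding/json rejecting minikube's human-readable hint: the test decodes whatever the ssh command printed as if it were crictl's JSON. A self-contained sketch reproducing the exact error (standard library only):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		// With the node stopped, "minikube ssh" prints a hint instead of JSON;
		// decoding that text yields the error quoted in the log above.
		out := []byte("* The control plane node must be running for this command")
		var images interface{}
		if err := json.Unmarshal(out, &images); err != nil {
			fmt.Println(err) // invalid character '*' looking for beginning of value
		}
	}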

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-941000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-941000 --alsologtostderr -v=1: exit status 89 (37.742375ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-941000"

-- /stdout --
** stderr ** 
	I0925 04:27:37.307105    6656 out.go:296] Setting OutFile to fd 1 ...
	I0925 04:27:37.307242    6656 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:27:37.307245    6656 out.go:309] Setting ErrFile to fd 2...
	I0925 04:27:37.307248    6656 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:27:37.307370    6656 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17297-1010/.minikube/bin
	I0925 04:27:37.307601    6656 out.go:303] Setting JSON to false
	I0925 04:27:37.307609    6656 mustload.go:65] Loading cluster: default-k8s-diff-port-941000
	I0925 04:27:37.307812    6656 config.go:182] Loaded profile config "default-k8s-diff-port-941000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 04:27:37.311320    6656 out.go:177] * The control plane node must be running for this command
	I0925 04:27:37.315327    6656 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-941000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-941000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-941000 -n default-k8s-diff-port-941000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-941000 -n default-k8s-diff-port-941000: exit status 7 (26.526417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-941000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-941000 -n default-k8s-diff-port-941000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-941000 -n default-k8s-diff-port-941000: exit status 7 (26.9755ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-941000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.09s)

TestStartStop/group/newest-cni/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-140000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-140000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.2: exit status 80 (5.17741175s)

-- stdout --
	* [newest-cni-140000] minikube v1.31.2 on Darwin 13.6 (arm64)
	  - MINIKUBE_LOCATION=17297
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node newest-cni-140000 in cluster newest-cni-140000
	* Restarting existing qemu2 VM for "newest-cni-140000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-140000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0925 04:27:39.646390    6691 out.go:296] Setting OutFile to fd 1 ...
	I0925 04:27:39.646523    6691 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:27:39.646526    6691 out.go:309] Setting ErrFile to fd 2...
	I0925 04:27:39.646528    6691 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:27:39.646663    6691 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17297-1010/.minikube/bin
	I0925 04:27:39.647594    6691 out.go:303] Setting JSON to false
	I0925 04:27:39.662756    6691 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3434,"bootTime":1695637825,"procs":414,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.6","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 04:27:39.662834    6691 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0925 04:27:39.666622    6691 out.go:177] * [newest-cni-140000] minikube v1.31.2 on Darwin 13.6 (arm64)
	I0925 04:27:39.673611    6691 out.go:177]   - MINIKUBE_LOCATION=17297
	I0925 04:27:39.677506    6691 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	I0925 04:27:39.673683    6691 notify.go:220] Checking for updates...
	I0925 04:27:39.683612    6691 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 04:27:39.686591    6691 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 04:27:39.689613    6691 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	I0925 04:27:39.692627    6691 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 04:27:39.695945    6691 config.go:182] Loaded profile config "newest-cni-140000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 04:27:39.696193    6691 driver.go:373] Setting default libvirt URI to qemu:///system
	I0925 04:27:39.700560    6691 out.go:177] * Using the qemu2 driver based on existing profile
	I0925 04:27:39.706466    6691 start.go:298] selected driver: qemu2
	I0925 04:27:39.706473    6691 start.go:902] validating driver "qemu2" against &{Name:newest-cni-140000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.2 ClusterName:newest-cni-140000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<n
il> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 04:27:39.706514    6691 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 04:27:39.708561    6691 start_flags.go:941] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0925 04:27:39.708584    6691 cni.go:84] Creating CNI manager for ""
	I0925 04:27:39.708591    6691 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 04:27:39.708596    6691 start_flags.go:321] config:
	{Name:newest-cni-140000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:newest-cni-140000 Namespace:default A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeReques
ted:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 04:27:39.712624    6691 iso.go:125] acquiring lock: {Name:mkf881a60cf9fd1672567914305ff6f7a4f13809 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 04:27:39.719587    6691 out.go:177] * Starting control plane node newest-cni-140000 in cluster newest-cni-140000
	I0925 04:27:39.723575    6691 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0925 04:27:39.723592    6691 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0925 04:27:39.723602    6691 cache.go:57] Caching tarball of preloaded images
	I0925 04:27:39.723655    6691 preload.go:174] Found /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0925 04:27:39.723660    6691 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0925 04:27:39.723721    6691 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/newest-cni-140000/config.json ...
	I0925 04:27:39.724070    6691 start.go:365] acquiring machines lock for newest-cni-140000: {Name:mk918d99818e2cf21e5912bc291ed18d4b442ba6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 04:27:39.724100    6691 start.go:369] acquired machines lock for "newest-cni-140000" in 23.708µs
	I0925 04:27:39.724109    6691 start.go:96] Skipping create...Using existing machine configuration
	I0925 04:27:39.724114    6691 fix.go:54] fixHost starting: 
	I0925 04:27:39.724236    6691 fix.go:102] recreateIfNeeded on newest-cni-140000: state=Stopped err=<nil>
	W0925 04:27:39.724244    6691 fix.go:128] unexpected machine state, will restart: <nil>
	I0925 04:27:39.728406    6691 out.go:177] * Restarting existing qemu2 VM for "newest-cni-140000" ...
	I0925 04:27:39.736598    6691 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/newest-cni-140000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/newest-cni-140000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/newest-cni-140000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:8a:f1:41:5d:b1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/newest-cni-140000/disk.qcow2
	I0925 04:27:39.738493    6691 main.go:141] libmachine: STDOUT: 
	I0925 04:27:39.738514    6691 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 04:27:39.738542    6691 fix.go:56] fixHost completed within 14.427541ms
	I0925 04:27:39.738547    6691 start.go:83] releasing machines lock for "newest-cni-140000", held for 14.444042ms
	W0925 04:27:39.738552    6691 start.go:688] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 04:27:39.738584    6691 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:27:39.738589    6691 start.go:703] Will try again in 5 seconds ...
	I0925 04:27:44.740815    6691 start.go:365] acquiring machines lock for newest-cni-140000: {Name:mk918d99818e2cf21e5912bc291ed18d4b442ba6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 04:27:44.741314    6691 start.go:369] acquired machines lock for "newest-cni-140000" in 367.291µs
	I0925 04:27:44.741454    6691 start.go:96] Skipping create...Using existing machine configuration
	I0925 04:27:44.741475    6691 fix.go:54] fixHost starting: 
	I0925 04:27:44.742211    6691 fix.go:102] recreateIfNeeded on newest-cni-140000: state=Stopped err=<nil>
	W0925 04:27:44.742238    6691 fix.go:128] unexpected machine state, will restart: <nil>
	I0925 04:27:44.750664    6691 out.go:177] * Restarting existing qemu2 VM for "newest-cni-140000" ...
	I0925 04:27:44.755801    6691 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/newest-cni-140000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/newest-cni-140000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/newest-cni-140000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:8a:f1:41:5d:b1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/newest-cni-140000/disk.qcow2
	I0925 04:27:44.764987    6691 main.go:141] libmachine: STDOUT: 
	I0925 04:27:44.765033    6691 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0925 04:27:44.765119    6691 fix.go:56] fixHost completed within 23.644ms
	I0925 04:27:44.765139    6691 start.go:83] releasing machines lock for "newest-cni-140000", held for 23.804041ms
	W0925 04:27:44.765330    6691 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-140000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-140000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0925 04:27:44.771604    6691 out.go:177] 
	W0925 04:27:44.775827    6691 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0925 04:27:44.775868    6691 out.go:239] * 
	* 
	W0925 04:27:44.778486    6691 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0925 04:27:44.786661    6691 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-140000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-140000 -n newest-cni-140000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-140000 -n newest-cni-140000: exit status 7 (66.442958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-140000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.25s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p newest-cni-140000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p newest-cni-140000 "sudo crictl images -o json": exit status 89 (49.229083ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p newest-cni-140000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p newest-cni-140000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p newest-cni-140000"
start_stop_delete_test.go:304: v1.28.2 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.2",
- 	"registry.k8s.io/kube-controller-manager:v1.28.2",
- 	"registry.k8s.io/kube-proxy:v1.28.2",
- 	"registry.k8s.io/kube-scheduler:v1.28.2",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-140000 -n newest-cni-140000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-140000 -n newest-cni-140000: exit status 7 (27.910208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-140000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

TestStartStop/group/newest-cni/serial/Pause (0.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-140000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-140000 --alsologtostderr -v=1: exit status 89 (39.675166ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p newest-cni-140000"

-- /stdout --
** stderr ** 
	I0925 04:27:44.969723    6705 out.go:296] Setting OutFile to fd 1 ...
	I0925 04:27:44.969883    6705 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:27:44.969886    6705 out.go:309] Setting ErrFile to fd 2...
	I0925 04:27:44.969889    6705 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:27:44.970034    6705 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17297-1010/.minikube/bin
	I0925 04:27:44.970258    6705 out.go:303] Setting JSON to false
	I0925 04:27:44.970267    6705 mustload.go:65] Loading cluster: newest-cni-140000
	I0925 04:27:44.970466    6705 config.go:182] Loaded profile config "newest-cni-140000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 04:27:44.974804    6705 out.go:177] * The control plane node must be running for this command
	I0925 04:27:44.978703    6705 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-140000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-140000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-140000 -n newest-cni-140000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-140000 -n newest-cni-140000: exit status 7 (27.812333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-140000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-140000 -n newest-cni-140000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-140000 -n newest-cni-140000: exit status 7 (27.566667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-140000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)
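
Note that "status error: exit status 7 (may be ok)" above is deliberate: "minikube status" encodes host state in its exit code, so the post-mortem helper treats a non-zero code as data rather than as a failure. A hedged sketch of reading both the printed state and the exit code, assuming the same binary path and profile name as this run:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64", "status",
			"--format={{.Host}}", "-p", "newest-cni-140000")
		out, err := cmd.Output() // stdout is still returned on a non-zero exit

		code := 0
		if ee, ok := err.(*exec.ExitError); ok {
			code = ee.ExitCode() // 7 in the runs above, meaning the host is stopped
		}
		fmt.Printf("state=%q exit=%d\n", strings.TrimSpace(string(out)), code)
	}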

Test pass (140/255)

Order passed test Duration
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.09
10 TestDownloadOnly/v1.28.2/json-events 9.18
11 TestDownloadOnly/v1.28.2/preload-exists 0
14 TestDownloadOnly/v1.28.2/kubectl 0
15 TestDownloadOnly/v1.28.2/LogsDuration 0.07
16 TestDownloadOnly/DeleteAll 0.24
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.23
19 TestBinaryMirror 0.38
22 TestAddons/Setup 403.7
26 TestAddons/parallel/InspektorGadget 10.26
31 TestAddons/parallel/Headlamp 12.42
35 TestAddons/serial/GCPAuth/Namespaces 0.07
36 TestAddons/StoppedEnableDisable 12.27
44 TestHyperKitDriverInstallOrUpdate 8.45
47 TestErrorSpam/setup 31.73
48 TestErrorSpam/start 0.34
49 TestErrorSpam/status 0.26
50 TestErrorSpam/pause 0.63
51 TestErrorSpam/unpause 0.58
52 TestErrorSpam/stop 3.23
55 TestFunctional/serial/CopySyncFile 0
56 TestFunctional/serial/StartWithProxy 45.4
57 TestFunctional/serial/AuditLog 0
58 TestFunctional/serial/SoftStart 33.59
59 TestFunctional/serial/KubeContext 0.03
60 TestFunctional/serial/KubectlGetPods 0.05
63 TestFunctional/serial/CacheCmd/cache/add_remote 3.4
64 TestFunctional/serial/CacheCmd/cache/add_local 1.31
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.03
66 TestFunctional/serial/CacheCmd/cache/list 0.03
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.09
68 TestFunctional/serial/CacheCmd/cache/cache_reload 0.93
69 TestFunctional/serial/CacheCmd/cache/delete 0.07
70 TestFunctional/serial/MinikubeKubectlCmd 0.45
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.53
72 TestFunctional/serial/ExtraConfig 36.68
73 TestFunctional/serial/ComponentHealth 0.04
74 TestFunctional/serial/LogsCmd 0.62
75 TestFunctional/serial/LogsFileCmd 0.59
76 TestFunctional/serial/InvalidService 4.26
78 TestFunctional/parallel/ConfigCmd 0.2
79 TestFunctional/parallel/DashboardCmd 7.91
80 TestFunctional/parallel/DryRun 0.22
81 TestFunctional/parallel/InternationalLanguage 0.11
82 TestFunctional/parallel/StatusCmd 0.27
87 TestFunctional/parallel/AddonsCmd 0.12
90 TestFunctional/parallel/SSHCmd 0.15
91 TestFunctional/parallel/CpCmd 0.31
93 TestFunctional/parallel/FileSync 0.08
94 TestFunctional/parallel/CertSync 0.46
98 TestFunctional/parallel/NodeLabels 0.04
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.07
102 TestFunctional/parallel/License 0.2
104 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.23
105 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
107 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.12
108 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.04
109 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
110 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
111 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
112 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
113 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
114 TestFunctional/parallel/ServiceCmd/DeployApp 7.09
115 TestFunctional/parallel/ServiceCmd/List 0.29
116 TestFunctional/parallel/ServiceCmd/JSONOutput 0.29
117 TestFunctional/parallel/ServiceCmd/HTTPS 0.11
118 TestFunctional/parallel/ServiceCmd/Format 0.11
119 TestFunctional/parallel/ServiceCmd/URL 0.12
120 TestFunctional/parallel/ProfileCmd/profile_not_create 0.19
121 TestFunctional/parallel/ProfileCmd/profile_list 0.15
122 TestFunctional/parallel/ProfileCmd/profile_json_output 0.15
123 TestFunctional/parallel/MountCmd/any-port 5.42
126 TestFunctional/parallel/Version/short 0.04
127 TestFunctional/parallel/Version/components 0.18
128 TestFunctional/parallel/ImageCommands/ImageListShort 0.08
129 TestFunctional/parallel/ImageCommands/ImageListTable 0.09
130 TestFunctional/parallel/ImageCommands/ImageListJson 0.08
131 TestFunctional/parallel/ImageCommands/ImageListYaml 0.08
132 TestFunctional/parallel/ImageCommands/ImageBuild 1.53
133 TestFunctional/parallel/ImageCommands/Setup 1.68
134 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.07
135 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.51
136 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 3.56
137 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.53
138 TestFunctional/parallel/ImageCommands/ImageRemove 0.18
139 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.63
140 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.64
141 TestFunctional/parallel/DockerEnv/bash 0.39
142 TestFunctional/parallel/UpdateContextCmd/no_changes 0.05
143 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
144 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
145 TestFunctional/delete_addon-resizer_images 0.11
146 TestFunctional/delete_my-image_image 0.04
147 TestFunctional/delete_minikube_cached_images 0.04
151 TestImageBuild/serial/Setup 32.22
152 TestImageBuild/serial/NormalBuild 0.98
154 TestImageBuild/serial/BuildWithDockerIgnore 0.12
155 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.11
158 TestIngressAddonLegacy/StartLegacyK8sCluster 63.08
160 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 13.34
161 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.19
165 TestJSONOutput/start/Command 41.83
166 TestJSONOutput/start/Audit 0
168 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
169 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
171 TestJSONOutput/pause/Command 0.28
172 TestJSONOutput/pause/Audit 0
174 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
175 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
177 TestJSONOutput/unpause/Command 0.23
178 TestJSONOutput/unpause/Audit 0
180 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
181 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
183 TestJSONOutput/stop/Command 12.07
184 TestJSONOutput/stop/Audit 0
186 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
188 TestErrorJSONOutput 0.31
193 TestMainNoArgs 0.03
194 TestMinikubeProfile 63.35
250 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
254 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
255 TestNoKubernetes/serial/ProfileList 0.14
256 TestNoKubernetes/serial/Stop 0.06
258 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
276 TestStartStop/group/old-k8s-version/serial/Stop 0.06
277 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.08
287 TestStartStop/group/no-preload/serial/Stop 0.06
288 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.09
298 TestStartStop/group/embed-certs/serial/Stop 0.06
299 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.09
309 TestStartStop/group/default-k8s-diff-port/serial/Stop 0.06
310 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.08
316 TestStartStop/group/newest-cni/serial/DeployApp 0
317 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
318 TestStartStop/group/newest-cni/serial/Stop 0.06
319 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.09
321 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
322 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-427000
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-427000: exit status 85 (92.115791ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-427000 | jenkins | v1.31.2 | 25 Sep 23 03:33 PDT |          |
	|         | -p download-only-427000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/25 03:33:20
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.21.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0925 03:33:20.150406    1471 out.go:296] Setting OutFile to fd 1 ...
	I0925 03:33:20.150568    1471 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 03:33:20.150575    1471 out.go:309] Setting ErrFile to fd 2...
	I0925 03:33:20.150578    1471 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 03:33:20.150701    1471 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17297-1010/.minikube/bin
	W0925 03:33:20.150779    1471 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17297-1010/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17297-1010/.minikube/config/config.json: no such file or directory
	I0925 03:33:20.151871    1471 out.go:303] Setting JSON to true
	I0925 03:33:20.168318    1471 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":175,"bootTime":1695637825,"procs":397,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.6","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 03:33:20.168401    1471 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0925 03:33:20.175846    1471 out.go:97] [download-only-427000] minikube v1.31.2 on Darwin 13.6 (arm64)
	I0925 03:33:20.181840    1471 out.go:169] MINIKUBE_LOCATION=17297
	W0925 03:33:20.176014    1471 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball: no such file or directory
	I0925 03:33:20.176072    1471 notify.go:220] Checking for updates...
	I0925 03:33:20.192666    1471 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	I0925 03:33:20.196813    1471 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 03:33:20.199858    1471 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 03:33:20.201302    1471 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	W0925 03:33:20.207799    1471 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0925 03:33:20.207991    1471 driver.go:373] Setting default libvirt URI to qemu:///system
	I0925 03:33:20.212801    1471 out.go:97] Using the qemu2 driver based on user configuration
	I0925 03:33:20.212820    1471 start.go:298] selected driver: qemu2
	I0925 03:33:20.212834    1471 start.go:902] validating driver "qemu2" against <nil>
	I0925 03:33:20.212902    1471 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0925 03:33:20.216761    1471 out.go:169] Automatically selected the socket_vmnet network
	I0925 03:33:20.222426    1471 start_flags.go:384] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0925 03:33:20.222512    1471 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0925 03:33:20.222569    1471 cni.go:84] Creating CNI manager for ""
	I0925 03:33:20.222586    1471 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0925 03:33:20.222591    1471 start_flags.go:321] config:
	{Name:download-only-427000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-427000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 03:33:20.228273    1471 iso.go:125] acquiring lock: {Name:mkf881a60cf9fd1672567914305ff6f7a4f13809 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 03:33:20.232654    1471 out.go:97] Downloading VM boot image ...
	I0925 03:33:20.232673    1471 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso
	I0925 03:33:25.555005    1471 out.go:97] Starting control plane node download-only-427000 in cluster download-only-427000
	I0925 03:33:25.555031    1471 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0925 03:33:25.614606    1471 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0925 03:33:25.614644    1471 cache.go:57] Caching tarball of preloaded images
	I0925 03:33:25.614805    1471 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0925 03:33:25.619935    1471 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0925 03:33:25.619942    1471 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0925 03:33:25.697617    1471 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4?checksum=md5:a000baffb0664b293d602f95ed25caa6 -> /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0925 03:33:31.703423    1471 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0925 03:33:31.703562    1471 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0925 03:33:32.343425    1471 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0925 03:33:32.343613    1471 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/download-only-427000/config.json ...
	I0925 03:33:32.343634    1471 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/download-only-427000/config.json: {Name:mk73556e20767bba9803568dbbfd5b8f39da6dad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 03:33:32.343852    1471 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0925 03:33:32.344010    1471 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 -> /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/darwin/arm64/v1.16.0/kubectl
	I0925 03:33:32.582962    1471 out.go:169] 
	W0925 03:33:32.588065    1471 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/17297-1010/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x103ce5800 0x103ce5800 0x103ce5800 0x103ce5800 0x103ce5800 0x103ce5800 0x103ce5800] Decompressors:map[bz2:0x14000512dd0 gz:0x14000512dd8 tar:0x14000512d70 tar.bz2:0x14000512d90 tar.gz:0x14000512da0 tar.xz:0x14000512db0 tar.zst:0x14000512dc0 tbz2:0x14000512d90 tgz:0x14000512da0 txz:0x14000512db0 tzst:0x14000512dc0 xz:0x14000512de0 zip:0x14000512df0 zst:0x14000512de8] Getters:map[file:0x14000062710 http:0x1400017e640 https:0x1400017e690] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0925 03:33:32.588091    1471 out_reason.go:110] 
	W0925 03:33:32.593919    1471 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0925 03:33:32.598026    1471 out.go:169] 
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-427000"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.09s)
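
Although LogsDuration passes (exit status 85 from "minikube logs" is the expected code for a download-only profile whose control plane never existed), the Last Start log above also records why the v1.16.0 run could not cache kubectl: the darwin/arm64 checksum file is missing upstream, so the getter's checksum download fails with a 404. A small sketch, assuming network access and using the checksum URL verbatim from the log, that reproduces the probe:

	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		// Checksum URL copied from the "Failed to cache kubectl" line above.
		const url = "https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1"

		resp, err := http.Head(url)
		if err != nil {
			fmt.Println("request failed:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println(resp.Status) // "404 Not Found" at the time of this run
	}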

TestDownloadOnly/v1.28.2/json-events (9.18s)

=== RUN   TestDownloadOnly/v1.28.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-427000 --force --alsologtostderr --kubernetes-version=v1.28.2 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-427000 --force --alsologtostderr --kubernetes-version=v1.28.2 --container-runtime=docker --driver=qemu2 : (9.179558833s)
--- PASS: TestDownloadOnly/v1.28.2/json-events (9.18s)

TestDownloadOnly/v1.28.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.2/preload-exists
--- PASS: TestDownloadOnly/v1.28.2/preload-exists (0.00s)

TestDownloadOnly/v1.28.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.2/kubectl
--- PASS: TestDownloadOnly/v1.28.2/kubectl (0.00s)

TestDownloadOnly/v1.28.2/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.28.2/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-427000
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-427000: exit status 85 (73.377833ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-427000 | jenkins | v1.31.2 | 25 Sep 23 03:33 PDT |          |
	|         | -p download-only-427000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-427000 | jenkins | v1.31.2 | 25 Sep 23 03:33 PDT |          |
	|         | -p download-only-427000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.2   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/25 03:33:32
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.21.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0925 03:33:32.779459    1491 out.go:296] Setting OutFile to fd 1 ...
	I0925 03:33:32.779592    1491 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 03:33:32.779595    1491 out.go:309] Setting ErrFile to fd 2...
	I0925 03:33:32.779597    1491 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 03:33:32.779733    1491 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17297-1010/.minikube/bin
	W0925 03:33:32.779801    1491 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17297-1010/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17297-1010/.minikube/config/config.json: no such file or directory
	I0925 03:33:32.780661    1491 out.go:303] Setting JSON to true
	I0925 03:33:32.795878    1491 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":187,"bootTime":1695637825,"procs":387,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.6","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 03:33:32.795936    1491 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0925 03:33:32.800427    1491 out.go:97] [download-only-427000] minikube v1.31.2 on Darwin 13.6 (arm64)
	I0925 03:33:32.804587    1491 out.go:169] MINIKUBE_LOCATION=17297
	I0925 03:33:32.800522    1491 notify.go:220] Checking for updates...
	I0925 03:33:32.810503    1491 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	I0925 03:33:32.813581    1491 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 03:33:32.816610    1491 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 03:33:32.819552    1491 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	W0925 03:33:32.825595    1491 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0925 03:33:32.825881    1491 config.go:182] Loaded profile config "download-only-427000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0925 03:33:32.825917    1491 start.go:810] api.Load failed for download-only-427000: filestore "download-only-427000": Docker machine "download-only-427000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0925 03:33:32.825970    1491 driver.go:373] Setting default libvirt URI to qemu:///system
	W0925 03:33:32.825990    1491 start.go:810] api.Load failed for download-only-427000: filestore "download-only-427000": Docker machine "download-only-427000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0925 03:33:32.827412    1491 out.go:97] Using the qemu2 driver based on existing profile
	I0925 03:33:32.827421    1491 start.go:298] selected driver: qemu2
	I0925 03:33:32.827424    1491 start.go:902] validating driver "qemu2" against &{Name:download-only-427000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-427000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 03:33:32.829437    1491 cni.go:84] Creating CNI manager for ""
	I0925 03:33:32.829448    1491 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 03:33:32.829456    1491 start_flags.go:321] config:
	{Name:download-only-427000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:download-only-427000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 03:33:32.833427    1491 iso.go:125] acquiring lock: {Name:mkf881a60cf9fd1672567914305ff6f7a4f13809 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 03:33:32.836580    1491 out.go:97] Starting control plane node download-only-427000 in cluster download-only-427000
	I0925 03:33:32.836589    1491 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0925 03:33:32.894181    1491 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.2/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0925 03:33:32.894195    1491 cache.go:57] Caching tarball of preloaded images
	I0925 03:33:32.894365    1491 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0925 03:33:32.899538    1491 out.go:97] Downloading Kubernetes v1.28.2 preload ...
	I0925 03:33:32.899545    1491 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 ...
	I0925 03:33:32.973003    1491 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.2/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4?checksum=md5:48f32a2a1ca4194a6d2a21c3ded2b2db -> /Users/jenkins/minikube-integration/17297-1010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-427000"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.2/LogsDuration (0.07s)

TestDownloadOnly/DeleteAll (0.24s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.24s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.23s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-427000
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.23s)

TestBinaryMirror (0.38s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-317000 --alsologtostderr --binary-mirror http://127.0.0.1:49310 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-317000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-317000
--- PASS: TestBinaryMirror (0.38s)

TestAddons/Setup (403.7s)

=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-183000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:88: (dbg) Done: out/minikube-darwin-arm64 start -p addons-183000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=qemu2  --addons=ingress --addons=ingress-dns: (6m43.698458667s)
--- PASS: TestAddons/Setup (403.70s)

TestAddons/parallel/InspektorGadget (10.26s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-dmqnx" [94a4278a-2b52-4344-a640-8a01a54306c2] Running
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.006946292s
addons_test.go:817: (dbg) Run:  out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-183000
addons_test.go:817: (dbg) Done: out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-183000: (5.253024084s)
--- PASS: TestAddons/parallel/InspektorGadget (10.26s)

TestAddons/parallel/Headlamp (12.42s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:800: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-183000 --alsologtostderr -v=1
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-58b88cff49-kdgv2" [f0f974cd-0799-485f-984a-d6be7c88ad59] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-58b88cff49-kdgv2" [f0f974cd-0799-485f-984a-d6be7c88ad59] Running
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.007617917s
--- PASS: TestAddons/parallel/Headlamp (12.42s)

TestAddons/serial/GCPAuth/Namespaces (0.07s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:626: (dbg) Run:  kubectl --context addons-183000 create ns new-namespace
addons_test.go:640: (dbg) Run:  kubectl --context addons-183000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.07s)

TestAddons/StoppedEnableDisable (12.27s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:148: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-183000
addons_test.go:148: (dbg) Done: out/minikube-darwin-arm64 stop -p addons-183000: (12.077287583s)
addons_test.go:152: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-183000
addons_test.go:156: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-183000
addons_test.go:161: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-183000
--- PASS: TestAddons/StoppedEnableDisable (12.27s)

TestHyperKitDriverInstallOrUpdate (8.45s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (8.45s)

TestErrorSpam/setup (31.73s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-078000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-078000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-078000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-078000 --driver=qemu2 : (31.72502125s)
--- PASS: TestErrorSpam/setup (31.73s)

TestErrorSpam/start (0.34s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-078000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-078000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-078000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-078000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-078000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-078000 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

TestErrorSpam/status (0.26s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-078000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-078000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-078000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-078000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-078000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-078000 status
--- PASS: TestErrorSpam/status (0.26s)

TestErrorSpam/pause (0.63s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-078000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-078000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-078000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-078000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-078000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-078000 pause
--- PASS: TestErrorSpam/pause (0.63s)

TestErrorSpam/unpause (0.58s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-078000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-078000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-078000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-078000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-078000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-078000 unpause
--- PASS: TestErrorSpam/unpause (0.58s)

TestErrorSpam/stop (3.23s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-078000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-078000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-078000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-078000 stop: (3.07007275s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-078000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-078000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-078000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-078000 stop
--- PASS: TestErrorSpam/stop (3.23s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/17297-1010/.minikube/files/etc/test/nested/copy/1469/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (45.4s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-742000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2230: (dbg) Done: out/minikube-darwin-arm64 start -p functional-742000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (45.402037417s)
--- PASS: TestFunctional/serial/StartWithProxy (45.40s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (33.59s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-742000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-darwin-arm64 start -p functional-742000 --alsologtostderr -v=8: (33.58624825s)
functional_test.go:659: soft start took 33.586675333s for "functional-742000" cluster.
--- PASS: TestFunctional/serial/SoftStart (33.59s)

TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.05s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-742000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.05s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.4s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-742000 cache add registry.k8s.io/pause:3.1: (1.206292792s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-742000 cache add registry.k8s.io/pause:3.3: (1.144699833s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-742000 cache add registry.k8s.io/pause:latest: (1.053315291s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.40s)

TestFunctional/serial/CacheCmd/cache/add_local (1.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-742000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local3281524433/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 cache add minikube-local-cache-test:functional-742000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 cache delete minikube-local-cache-test:functional-742000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-742000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.31s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.03s)

TestFunctional/serial/CacheCmd/cache/list (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.09s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.93s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-742000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (85.343ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (0.93s)
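
The sequence above is the whole contract of the cache commands: remove the image inside the node, watch "crictl inspecti" fail with exit status 1, run "cache reload" to push the host-side cache back into the node, then watch "inspecti" succeed. A condensed sketch of that round trip (not the test's code), assuming the same binary path, profile, and image as this run:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(args ...string) error {
		return exec.Command("out/minikube-darwin-arm64", args...).Run()
	}

	func main() {
		profile := "functional-742000"
		image := "registry.k8s.io/pause:latest"

		// Remove the image from inside the node.
		_ = run("-p", profile, "ssh", "sudo", "docker", "rmi", image)

		// Expected to fail here: the image was just removed.
		fmt.Println("before reload:", run("-p", profile, "ssh", "sudo", "crictl", "inspecti", image))

		// Reload pushes the host-side cache back into the node ...
		_ = run("-p", profile, "cache", "reload")

		// ... after which inspecti succeeds (nil error).
		fmt.Println("after reload:", run("-p", profile, "ssh", "sudo", "crictl", "inspecti", image))
	}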

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/serial/MinikubeKubectlCmd (0.45s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 kubectl -- --context functional-742000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.45s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-742000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.53s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-742000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-darwin-arm64 start -p functional-742000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (36.680004083s)
functional_test.go:757: restart took 36.680118667s for "functional-742000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (36.68s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-742000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.04s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.62s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd1215289172/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.59s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-742000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-742000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-742000: exit status 115 (114.822125ms)
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:30718 |
	|-----------|-------------|-------------|----------------------------|
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-742000 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-742000 delete -f testdata/invalidsvc.yaml: (1.029840625s)
--- PASS: TestFunctional/serial/InvalidService (4.26s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-742000 config get cpus: exit status 14 (27.760416ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-742000 config get cpus: exit status 14 (28.644208ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.20s)
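Both non-zero exits above are the expected outcome: `config get` on an unset key fails (exit status 14 in this log) with the "specified key could not be found" message, while the set/get round-trip in between succeeds. A minimal Go sketch of the same round-trip, assuming minikube is on PATH and this profile exists:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        p := "functional-742000"
        // Reading an unset key exits non-zero with the error on stderr.
        if out, err := exec.Command("minikube", "-p", p, "config", "get", "cpus").CombinedOutput(); err != nil {
            if ee, ok := err.(*exec.ExitError); ok {
                fmt.Printf("unset key: exit %d, output: %s", ee.ExitCode(), out)
            }
        }
        // Set, read back, then unset again.
        exec.Command("minikube", "-p", p, "config", "set", "cpus", "2").Run()
        out, _ := exec.Command("minikube", "-p", p, "config", "get", "cpus").Output()
        fmt.Printf("cpus = %s", out) // expect "2"
        exec.Command("minikube", "-p", p, "config", "unset", "cpus").Run()
    }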
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-742000 --alsologtostderr -v=1]
E0925 04:10:47.339287    1469 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/client.crt: no such file or directory
2023/09/25 04:10:51 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-742000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 3683: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (7.91s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-742000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-742000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (115.762875ms)
-- stdout --
	* [functional-742000] minikube v1.31.2 on Darwin 13.6 (arm64)
	  - MINIKUBE_LOCATION=17297
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
-- /stdout --
** stderr ** 
	I0925 04:10:43.835592    3670 out.go:296] Setting OutFile to fd 1 ...
	I0925 04:10:43.835725    3670 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:10:43.835728    3670 out.go:309] Setting ErrFile to fd 2...
	I0925 04:10:43.835731    3670 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:10:43.835866    3670 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17297-1010/.minikube/bin
	I0925 04:10:43.836994    3670 out.go:303] Setting JSON to false
	I0925 04:10:43.852637    3670 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2418,"bootTime":1695637825,"procs":415,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.6","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 04:10:43.852718    3670 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0925 04:10:43.857190    3670 out.go:177] * [functional-742000] minikube v1.31.2 on Darwin 13.6 (arm64)
	I0925 04:10:43.865372    3670 out.go:177]   - MINIKUBE_LOCATION=17297
	I0925 04:10:43.869314    3670 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	I0925 04:10:43.865491    3670 notify.go:220] Checking for updates...
	I0925 04:10:43.875418    3670 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 04:10:43.878374    3670 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 04:10:43.881337    3670 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	I0925 04:10:43.888223    3670 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 04:10:43.892568    3670 config.go:182] Loaded profile config "functional-742000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 04:10:43.892814    3670 driver.go:373] Setting default libvirt URI to qemu:///system
	I0925 04:10:43.896334    3670 out.go:177] * Using the qemu2 driver based on existing profile
	I0925 04:10:43.903324    3670 start.go:298] selected driver: qemu2
	I0925 04:10:43.903329    3670 start.go:902] validating driver "qemu2" against &{Name:functional-742000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:functional-742000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 04:10:43.903371    3670 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 04:10:43.909338    3670 out.go:177] 
	W0925 04:10:43.913374    3670 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0925 04:10:43.916424    3670 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-742000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.22s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-742000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-742000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (110.609708ms)
-- stdout --
	* [functional-742000] minikube v1.31.2 sur Darwin 13.6 (arm64)
	  - MINIKUBE_LOCATION=17297
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
-- /stdout --
** stderr ** 
	I0925 04:10:43.721611    3666 out.go:296] Setting OutFile to fd 1 ...
	I0925 04:10:43.721722    3666 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:10:43.721725    3666 out.go:309] Setting ErrFile to fd 2...
	I0925 04:10:43.721728    3666 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:10:43.721851    3666 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17297-1010/.minikube/bin
	I0925 04:10:43.723250    3666 out.go:303] Setting JSON to false
	I0925 04:10:43.740799    3666 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2418,"bootTime":1695637825,"procs":415,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.6","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0925 04:10:43.740893    3666 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0925 04:10:43.746382    3666 out.go:177] * [functional-742000] minikube v1.31.2 sur Darwin 13.6 (arm64)
	I0925 04:10:43.754443    3666 out.go:177]   - MINIKUBE_LOCATION=17297
	I0925 04:10:43.758347    3666 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	I0925 04:10:43.754537    3666 notify.go:220] Checking for updates...
	I0925 04:10:43.762513    3666 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0925 04:10:43.765344    3666 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 04:10:43.768368    3666 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	I0925 04:10:43.771464    3666 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 04:10:43.774647    3666 config.go:182] Loaded profile config "functional-742000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 04:10:43.774877    3666 driver.go:373] Setting default libvirt URI to qemu:///system
	I0925 04:10:43.779336    3666 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0925 04:10:43.786368    3666 start.go:298] selected driver: qemu2
	I0925 04:10:43.786374    3666 start.go:902] validating driver "qemu2" against &{Name:functional-742000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:functional-742000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 04:10:43.786418    3666 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 04:10:43.792361    3666 out.go:177] 
	W0925 04:10:43.796273    3666 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0925 04:10:43.800339    3666 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)
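The French output is the point of this test: the same undersized --dry-run start is issued under a French locale, and the RSRC_INSUFFICIENT_REQ_MEMORY message comes back translated while the exit status stays 23. A minimal Go sketch, assuming (as the output suggests) minikube picks its message catalog from the locale environment variables:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("minikube", "start", "-p", "functional-742000",
            "--dry-run", "--memory", "250MB")
        // Assumption: LC_ALL/LANG drive minikube's message translation.
        cmd.Env = append(os.Environ(), "LC_ALL=fr", "LANG=fr")
        out, err := cmd.CombinedOutput()
        fmt.Printf("%s\nerr: %v\n", out, err) // expect French text and exit status 23
    }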
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.27s)
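The -f flag renders a Go template over minikube's status structure; .Host, .Kubelet, .APIServer and .Kubeconfig are the field names, and literal text such as the "kublet:" label in the command above passes through untouched. A minimal Go sketch, assuming minikube is on PATH:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("minikube", "-p", "functional-742000", "status",
            "-f", "host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}}").Output()
        if err != nil {
            // status typically exits non-zero when a component is not running;
            // the rendered template is still written to stdout.
            fmt.Println("status reported a non-running component:", err)
        }
        fmt.Printf("%s\n", out)
    }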
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.15s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 ssh -n functional-742000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 cp functional-742000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd2956233808/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 ssh -n functional-742000 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.31s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1469/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 ssh "sudo cat /etc/test/nested/copy/1469/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.08s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1469.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 ssh "sudo cat /etc/ssl/certs/1469.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1469.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 ssh "sudo cat /usr/share/ca-certificates/1469.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/14692.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 ssh "sudo cat /etc/ssl/certs/14692.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/14692.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 ssh "sudo cat /usr/share/ca-certificates/14692.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.46s)
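Each certificate is verified both under its own name and under a hashed name (51391683.0, 3ec20f2e.0). Those look like OpenSSL subject-hash link names; a sketch of deriving such a name, on the assumption that this convention applies and that openssl is available wherever the PEM file lives (the path is taken from the log):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // OpenSSL prints the 8-hex-digit subject hash; CA directories link
        // the certificate under "<hash>.0".
        out, err := exec.Command("openssl", "x509", "-noout", "-subject_hash",
            "-in", "/usr/share/ca-certificates/1469.pem").Output()
        if err != nil {
            panic(err)
        }
        fmt.Println(strings.TrimSpace(string(out)) + ".0") // expect one of the hashed names above
    }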
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-742000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.04s)
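The go-template here ranges over the first node's label map and prints just the keys. A minimal Go sketch of the same query, assuming kubectl is on PATH and the context from the log exists:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        tmpl := "{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}"
        out, err := exec.Command("kubectl", "--context", "functional-742000",
            "get", "nodes", "--output=go-template", "--template="+tmpl).Output()
        if err != nil {
            panic(err)
        }
        fmt.Println(string(out)) // space-separated label keys of the first node
    }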
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-742000 ssh "sudo systemctl is-active crio": exit status 1 (72.130083ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.07s)
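Exit status 1 is the expected result: systemctl is-active prints the unit state and exits non-zero for anything other than "active" (the inner status 3 is systemd's usual code for an inactive unit), and minikube ssh propagates the remote failure as its own exit status. A minimal Go sketch of the same check, assuming the docker-runtime profile from this log:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Only the configured runtime should be active; crio is expected to
        // report "inactive" together with a non-zero exit.
        out, err := exec.Command("minikube", "-p", "functional-742000", "ssh",
            "sudo systemctl is-active crio").Output()
        if state := strings.TrimSpace(string(out)); err != nil && state == "inactive" {
            fmt.Println("crio is disabled, as expected")
        }
    }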
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.20s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-742000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-742000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-742000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-742000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 3460: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.23s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-742000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-742000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [c57bd663-b576-4c45-ae7d-3148aa5df0c1] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [c57bd663-b576-4c45-ae7d-3148aa5df0c1] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.007057209s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.12s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-742000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.102.188.37 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-742000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1436: (dbg) Run:  kubectl --context functional-742000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-742000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-shxqc" [58d6086e-707d-4d21-bfda-49e3adf87469] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-shxqc" [58d6086e-707d-4d21-bfda-49e3adf87469] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.009060791s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.09s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.29s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 service list -o json
functional_test.go:1493: Took "289.69675ms" to run "out/minikube-darwin-arm64 -p functional-742000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.29s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.105.4:30506
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.11s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.11s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.105.4:30506
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.12s)
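`service <name> --url` resolves the NodePort endpoint without opening a browser, so the returned URL can be probed directly. A minimal Go sketch, assuming the hello-node deployment from the DeployApp step above is still running:

    package main

    import (
        "fmt"
        "net/http"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("minikube", "-p", "functional-742000",
            "service", "hello-node", "--url").Output()
        if err != nil {
            panic(err)
        }
        url := strings.TrimSpace(string(out)) // e.g. http://192.168.105.4:30506
        resp, err := http.Get(url)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        fmt.Println("echoserver replied:", resp.Status)
    }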
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.19s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1314: Took "120.894417ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1328: Took "32.713834ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.15s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1365: Took "118.499666ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1378: Took "32.691584ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.15s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-742000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3750507592/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1695640212723283000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3750507592/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1695640212723283000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3750507592/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1695640212723283000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3750507592/001/test-1695640212723283000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-742000 ssh "findmnt -T /mount-9p | grep 9p": exit status 80 (57.052542ms)
-- stdout --
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get machine status: state: connect: dial unix /Users/jenkins/minikube-integration/17297-1010/.minikube/machines/functional-742000/monitor: connect: connection refused
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_mount_e7c129cfe8a5cfe63f749cd4b5d06b5292c14d66_0.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 25 11:10 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 25 11:10 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 25 11:10 test-1695640212723283000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 ssh cat /mount-9p/test-1695640212723283000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-742000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [34a89551-1f8d-4e3c-bf25-681dddd4ac3d] Pending
helpers_test.go:344: "busybox-mount" [34a89551-1f8d-4e3c-bf25-681dddd4ac3d] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [34a89551-1f8d-4e3c-bf25-681dddd4ac3d] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [34a89551-1f8d-4e3c-bf25-681dddd4ac3d] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.006481042s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-742000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-742000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3750507592/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.42s)
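Worth noting: the first findmnt probe above failed with a transient GUEST_STATUS error (connection refused on the machine's monitor socket) and the immediate retry passed, so a by-hand reproduction should poll rather than assert once. A minimal Go sketch of that polling, assuming the `minikube mount ...:/mount-9p` daemon from the log is still running:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        // Poll for the 9p mount instead of asserting once; the guest can
        // refuse connections briefly, as the log above shows.
        for i := 0; i < 10; i++ {
            err := exec.Command("minikube", "-p", "functional-742000", "ssh",
                "findmnt -T /mount-9p | grep 9p").Run()
            if err == nil {
                fmt.Println("9p mount visible in the guest")
                return
            }
            time.Sleep(time.Second)
        }
        fmt.Println("mount never appeared")
    }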
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.18s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-742000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.2
registry.k8s.io/kube-proxy:v1.28.2
registry.k8s.io/kube-controller-manager:v1.28.2
registry.k8s.io/kube-apiserver:v1.28.2
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-742000
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-742000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-742000 image ls --format short --alsologtostderr:
I0925 04:11:04.214543    3842 out.go:296] Setting OutFile to fd 1 ...
I0925 04:11:04.214917    3842 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0925 04:11:04.214922    3842 out.go:309] Setting ErrFile to fd 2...
I0925 04:11:04.214924    3842 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0925 04:11:04.215076    3842 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17297-1010/.minikube/bin
I0925 04:11:04.215573    3842 config.go:182] Loaded profile config "functional-742000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0925 04:11:04.215632    3842 config.go:182] Loaded profile config "functional-742000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0925 04:11:04.216448    3842 ssh_runner.go:195] Run: systemctl --version
I0925 04:11:04.216464    3842 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/functional-742000/id_rsa Username:docker}
I0925 04:11:04.254300    3842 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.08s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-742000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| gcr.io/k8s-minikube/busybox                 | latest            | 71a676dd070f4 | 1.41MB |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| docker.io/localhost/my-image                | functional-742000 | 930fa68dec003 | 1.41MB |
| docker.io/library/minikube-local-cache-test | functional-742000 | c6b9baf40f0e1 | 30B    |
| registry.k8s.io/kube-controller-manager     | v1.28.2           | 89d57b83c1786 | 116MB  |
| docker.io/library/nginx                     | alpine            | fa0c6bb795403 | 43.4MB |
| registry.k8s.io/coredns/coredns             | v1.10.1           | 97e04611ad434 | 51.4MB |
| registry.k8s.io/pause                       | 3.9               | 829e9de338bd5 | 514kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/kube-apiserver              | v1.28.2           | 30bb499447fe1 | 120MB  |
| registry.k8s.io/kube-scheduler              | v1.28.2           | 64fc40cee3716 | 57.8MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| gcr.io/google-containers/addon-resizer      | functional-742000 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| registry.k8s.io/kube-proxy                  | v1.28.2           | 7da62c127fc0f | 68.3MB |
| registry.k8s.io/etcd                        | 3.5.9-0           | 9cdd6470f48c8 | 181MB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-742000 image ls --format table --alsologtostderr:
I0925 04:11:05.989324    3854 out.go:296] Setting OutFile to fd 1 ...
I0925 04:11:05.989508    3854 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0925 04:11:05.989515    3854 out.go:309] Setting ErrFile to fd 2...
I0925 04:11:05.989518    3854 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0925 04:11:05.989666    3854 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17297-1010/.minikube/bin
I0925 04:11:05.990122    3854 config.go:182] Loaded profile config "functional-742000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0925 04:11:05.990182    3854 config.go:182] Loaded profile config "functional-742000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0925 04:11:05.991031    3854 ssh_runner.go:195] Run: systemctl --version
I0925 04:11:05.991043    3854 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/functional-742000/id_rsa Username:docker}
I0925 04:11:06.030955    3854 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.09s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-742000 image ls --format json --alsologtostderr:
[{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"7da62c127fc0f2c3473babe4dd0fe1da874278c4e524a490b1781e3e0e6dddfa","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.2"],"size":"68300000"},{"id":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"51400000"},{"id":"89d57b83c17862d0ca2dd214e9e5ad425f8d67ecba32d10b846f8d22d3b5597c","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.2"],"size":"116000000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f7
78c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"930fa68dec0034e70d2bae2dfcaff23745c3d376bb8120c485f7e663d10b05d3","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-742000"],"size":"1410000"},{"id":"c6b9baf40f0e1b3af82006f2f5d14b4540bbd2498d7d9473233101dd7e5bac70","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-742000"],"size":"30"},{"id":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"181000000"},{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1410000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"64fc40cee3716a4596d219b360ce536adb7998eaeae3f5dbb774d6503f5039d7","repoDigests":[],"repo
Tags":["registry.k8s.io/kube-scheduler:v1.28.2"],"size":"57800000"},{"id":"fa0c6bb795403f8762e5cbf7b9f395aa036e7bd61c707485c1968b79bb3da9f1","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43400000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-742000"],"size":"32900000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"30bb499447fe1bc655853e2e8ac386cdeb28c80263536259cb54f290f9a58d6c","repoDigests":[],"repoTags":["registry.k8
s.io/kube-apiserver:v1.28.2"],"size":"120000000"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"514000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-742000 image ls --format json --alsologtostderr:
I0925 04:11:05.909422    3852 out.go:296] Setting OutFile to fd 1 ...
I0925 04:11:05.909578    3852 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0925 04:11:05.909581    3852 out.go:309] Setting ErrFile to fd 2...
I0925 04:11:05.909583    3852 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0925 04:11:05.909722    3852 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17297-1010/.minikube/bin
I0925 04:11:05.910212    3852 config.go:182] Loaded profile config "functional-742000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0925 04:11:05.910274    3852 config.go:182] Loaded profile config "functional-742000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0925 04:11:05.911128    3852 ssh_runner.go:195] Run: systemctl --version
I0925 04:11:05.911137    3852 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/functional-742000/id_rsa Username:docker}
I0925 04:11:05.949779    3852 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)
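The JSON format emits a single array of objects with `id`, `repoDigests`, `repoTags`, and `size` fields, so it pipes cleanly into standard tooling. A sketch, assuming `jq` is installed on the host:

    $ out/minikube-darwin-arm64 -p functional-742000 image ls --format json | jq -r '.[] | "\(.repoTags[0])  \(.size)"'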
TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-742000 image ls --format yaml --alsologtostderr:
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "514000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 64fc40cee3716a4596d219b360ce536adb7998eaeae3f5dbb774d6503f5039d7
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.2
size: "57800000"
- id: 89d57b83c17862d0ca2dd214e9e5ad425f8d67ecba32d10b846f8d22d3b5597c
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.2
size: "116000000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: c6b9baf40f0e1b3af82006f2f5d14b4540bbd2498d7d9473233101dd7e5bac70
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-742000
size: "30"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: 9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "181000000"
- id: 97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "51400000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-742000
size: "32900000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 30bb499447fe1bc655853e2e8ac386cdeb28c80263536259cb54f290f9a58d6c
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.2
size: "120000000"
- id: fa0c6bb795403f8762e5cbf7b9f395aa036e7bd61c707485c1968b79bb3da9f1
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43400000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 7da62c127fc0f2c3473babe4dd0fe1da874278c4e524a490b1781e3e0e6dddfa
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.2
size: "68300000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-742000 image ls --format yaml --alsologtostderr:
I0925 04:11:04.295068    3844 out.go:296] Setting OutFile to fd 1 ...
I0925 04:11:04.295204    3844 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0925 04:11:04.295208    3844 out.go:309] Setting ErrFile to fd 2...
I0925 04:11:04.295210    3844 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0925 04:11:04.295336    3844 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17297-1010/.minikube/bin
I0925 04:11:04.295804    3844 config.go:182] Loaded profile config "functional-742000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0925 04:11:04.295870    3844 config.go:182] Loaded profile config "functional-742000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0925 04:11:04.296710    3844 ssh_runner.go:195] Run: systemctl --version
I0925 04:11:04.296724    3844 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/functional-742000/id_rsa Username:docker}
I0925 04:11:04.335676    3844 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)

TestFunctional/parallel/ImageCommands/ImageBuild (1.53s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-742000 ssh pgrep buildkitd: exit status 1 (71.865833ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 image build -t localhost/my-image:functional-742000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-arm64 -p functional-742000 image build -t localhost/my-image:functional-742000 testdata/build --alsologtostderr: (1.378243917s)
functional_test.go:319: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-742000 image build -t localhost/my-image:functional-742000 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
a01966dde7f8: Pulling fs layer
a01966dde7f8: Verifying Checksum
a01966dde7f8: Download complete
a01966dde7f8: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> 71a676dd070f
Step 2/3 : RUN true
---> Running in a66a1f3faaad
Removing intermediate container a66a1f3faaad
---> d1b904714cb0
Step 3/3 : ADD content.txt /
---> 930fa68dec00
Successfully built 930fa68dec00
Successfully tagged localhost/my-image:functional-742000
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-742000 image build -t localhost/my-image:functional-742000 testdata/build --alsologtostderr:
I0925 04:11:04.449255    3848 out.go:296] Setting OutFile to fd 1 ...
I0925 04:11:04.449486    3848 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0925 04:11:04.449492    3848 out.go:309] Setting ErrFile to fd 2...
I0925 04:11:04.449494    3848 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0925 04:11:04.449630    3848 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17297-1010/.minikube/bin
I0925 04:11:04.450131    3848 config.go:182] Loaded profile config "functional-742000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0925 04:11:04.450835    3848 config.go:182] Loaded profile config "functional-742000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0925 04:11:04.451695    3848 ssh_runner.go:195] Run: systemctl --version
I0925 04:11:04.451707    3848 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1010/.minikube/machines/functional-742000/id_rsa Username:docker}
I0925 04:11:04.490339    3848 build_images.go:151] Building image from path: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.4130164339.tar
I0925 04:11:04.490407    3848 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0925 04:11:04.494092    3848 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4130164339.tar
I0925 04:11:04.495716    3848 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4130164339.tar: stat -c "%s %y" /var/lib/minikube/build/build.4130164339.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.4130164339.tar': No such file or directory
I0925 04:11:04.495732    3848 ssh_runner.go:362] scp /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.4130164339.tar --> /var/lib/minikube/build/build.4130164339.tar (3072 bytes)
I0925 04:11:04.502962    3848 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4130164339
I0925 04:11:04.505896    3848 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4130164339 -xf /var/lib/minikube/build/build.4130164339.tar
I0925 04:11:04.509383    3848 docker.go:340] Building image: /var/lib/minikube/build/build.4130164339
I0925 04:11:04.509425    3848 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-742000 /var/lib/minikube/build/build.4130164339
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/
I0925 04:11:05.787909    3848 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-742000 /var/lib/minikube/build/build.4130164339: (1.278466417s)
I0925 04:11:05.787983    3848 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4130164339
I0925 04:11:05.791106    3848 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4130164339.tar
I0925 04:11:05.793938    3848 build_images.go:207] Built localhost/my-image:functional-742000 from /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.4130164339.tar
I0925 04:11:05.793961    3848 build_images.go:123] succeeded building to: functional-742000
I0925 04:11:05.793967    3848 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (1.53s)
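The three build steps logged above imply a minimal Dockerfile under testdata/build along these lines (reconstructed from the logged steps; the actual file contents are not shown in this report):

    FROM gcr.io/k8s-minikube/busybox
    RUN true
    ADD content.txt /

which the test built inside the VM with:

    $ out/minikube-darwin-arm64 -p functional-742000 image build -t localhost/my-image:functional-742000 testdata/build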
TestFunctional/parallel/ImageCommands/Setup (1.68s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.59467025s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-742000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.68s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.07s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 image load --daemon gcr.io/google-containers/addon-resizer:functional-742000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-arm64 -p functional-742000 image load --daemon gcr.io/google-containers/addon-resizer:functional-742000 --alsologtostderr: (1.968990458s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.07s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.51s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 image load --daemon gcr.io/google-containers/addon-resizer:functional-742000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-arm64 -p functional-742000 image load --daemon gcr.io/google-containers/addon-resizer:functional-742000 --alsologtostderr: (1.429919875s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.51s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.56s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.497529125s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-742000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 image load --daemon gcr.io/google-containers/addon-resizer:functional-742000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-arm64 -p functional-742000 image load --daemon gcr.io/google-containers/addon-resizer:functional-742000 --alsologtostderr: (1.897661375s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.56s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.53s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 image save gcr.io/google-containers/addon-resizer:functional-742000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.53s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.18s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 image rm gcr.io/google-containers/addon-resizer:functional-742000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.18s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.63s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.63s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.64s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-742000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 image save --daemon gcr.io/google-containers/addon-resizer:functional-742000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-742000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.64s)
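Together, ImageSaveToFile, ImageRemove, ImageLoadFromFile, and ImageSaveDaemon exercise a full image round trip: save to a tarball, remove from the cluster, load the tarball back, then export into the host Docker daemon. The equivalent manual sequence, with the tarball path taken from this run:

    $ out/minikube-darwin-arm64 -p functional-742000 image save gcr.io/google-containers/addon-resizer:functional-742000 /Users/jenkins/workspace/addon-resizer-save.tar
    $ out/minikube-darwin-arm64 -p functional-742000 image rm gcr.io/google-containers/addon-resizer:functional-742000
    $ out/minikube-darwin-arm64 -p functional-742000 image load /Users/jenkins/workspace/addon-resizer-save.tar
    $ out/minikube-darwin-arm64 -p functional-742000 image save --daemon gcr.io/google-containers/addon-resizer:functional-742000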
TestFunctional/parallel/DockerEnv/bash (0.39s)
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-742000 docker-env) && out/minikube-darwin-arm64 status -p functional-742000"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-742000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.39s)
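`docker-env` prints shell export statements (DOCKER_HOST and related variables) that point the host's docker CLI at the Docker daemon inside the minikube VM; the test evaluates them in a bash subshell and then runs `docker images` against the VM. Typical interactive use follows the same pattern (a sketch):

    $ eval $(out/minikube-darwin-arm64 -p functional-742000 docker-env)
    $ docker images    # now lists the images inside the functional-742000 VM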
TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 update-context --alsologtostderr -v=2
E0925 04:11:07.821415    1469 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/client.crt: no such file or directory
E0925 04:11:48.783599    1469 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/client.crt: no such file or directory
E0925 04:13:10.705804    1469 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

TestFunctional/delete_addon-resizer_images (0.11s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-742000
--- PASS: TestFunctional/delete_addon-resizer_images (0.11s)

TestFunctional/delete_my-image_image (0.04s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-742000
--- PASS: TestFunctional/delete_my-image_image (0.04s)

TestFunctional/delete_minikube_cached_images (0.04s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-742000
--- PASS: TestFunctional/delete_minikube_cached_images (0.04s)

TestImageBuild/serial/Setup (32.22s)
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-543000 --driver=qemu2 
image_test.go:69: (dbg) Done: out/minikube-darwin-arm64 start -p image-543000 --driver=qemu2 : (32.220247334s)
--- PASS: TestImageBuild/serial/Setup (32.22s)

TestImageBuild/serial/NormalBuild (0.98s)
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-543000
--- PASS: TestImageBuild/serial/NormalBuild (0.98s)

TestImageBuild/serial/BuildWithDockerIgnore (0.12s)
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-543000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.12s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.11s)
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-543000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.11s)

TestIngressAddonLegacy/StartLegacyK8sCluster (63.08s)
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-arm64 start -p ingress-addon-legacy-907000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=qemu2 
E0925 04:14:18.708479    1469 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/functional-742000/client.crt: no such file or directory
E0925 04:14:18.713661    1469 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/functional-742000/client.crt: no such file or directory
E0925 04:14:18.725764    1469 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/functional-742000/client.crt: no such file or directory
E0925 04:14:18.746000    1469 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/functional-742000/client.crt: no such file or directory
E0925 04:14:18.786861    1469 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/functional-742000/client.crt: no such file or directory
E0925 04:14:18.867020    1469 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/functional-742000/client.crt: no such file or directory
E0925 04:14:19.027413    1469 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/functional-742000/client.crt: no such file or directory
E0925 04:14:19.348362    1469 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/functional-742000/client.crt: no such file or directory
E0925 04:14:19.990477    1469 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/functional-742000/client.crt: no such file or directory
E0925 04:14:21.272657    1469 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/functional-742000/client.crt: no such file or directory
E0925 04:14:23.834756    1469 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/functional-742000/client.crt: no such file or directory
E0925 04:14:28.957160    1469 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/functional-742000/client.crt: no such file or directory
E0925 04:14:39.199377    1469 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/functional-742000/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-darwin-arm64 start -p ingress-addon-legacy-907000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=qemu2 : (1m3.076534417s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (63.08s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (13.34s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-907000 addons enable ingress --alsologtostderr -v=5
E0925 04:14:59.680422    1469 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/functional-742000/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-907000 addons enable ingress --alsologtostderr -v=5: (13.335242333s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (13.34s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.19s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-907000 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.19s)

TestJSONOutput/start/Command (41.83s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-570000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 start -p json-output-570000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : (41.825080792s)
--- PASS: TestJSONOutput/start/Command (41.83s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.28s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-570000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.28s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.23s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-570000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.23s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (12.07s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-570000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-570000 --output=json --user=testUser: (12.071150416s)
--- PASS: TestJSONOutput/stop/Command (12.07s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.31s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-346000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-346000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (87.920292ms)
-- stdout --
	{"specversion":"1.0","id":"a79dc12d-dade-4deb-be07-1e93b376a52f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-346000] minikube v1.31.2 on Darwin 13.6 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d81b3fcf-95d3-4b40-9858-c7d87af9a742","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17297"}}
	{"specversion":"1.0","id":"c2a7c477-f021-4f4c-9278-322aab7d1810","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig"}}
	{"specversion":"1.0","id":"091a9c16-610b-46e1-9d26-9223a4384e42","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"7ab71c09-69e2-453c-9ef0-fda863fb833b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a4f5f38e-89e0-4871-85b4-22b1d0045f94","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube"}}
	{"specversion":"1.0","id":"dfed2c4d-bce5-4cc7-ad19-8d913e37a392","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"855b2da6-6ac7-4dd1-b97f-9e75df5d3ad9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-346000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-346000
--- PASS: TestErrorJSONOutput (0.31s)
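With --output=json, each line minikube emits is a CloudEvents-style object (`specversion`, `id`, `source`, `type`, `data`), so events can be filtered by type. A sketch for pulling out only the error event above, assuming `jq` on the host:

    $ out/minikube-darwin-arm64 start -p json-output-error-346000 --memory=2200 --output=json --driver=fail | jq 'select(.type == "io.k8s.sigs.minikube.error") | .data'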
TestMainNoArgs (0.03s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestMinikubeProfile (63.35s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-062000 --driver=qemu2 
E0925 04:17:02.564210    1469 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/functional-742000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-arm64 start -p first-062000 --driver=qemu2 : (29.999821708s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p second-064000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-arm64 start -p second-064000 --driver=qemu2 : (32.592427333s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 profile first-062000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 profile second-064000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-064000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-064000
helpers_test.go:175: Cleaning up "first-062000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-062000
--- PASS: TestMinikubeProfile (63.35s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-139000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-139000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (94.819375ms)
-- stdout --
	* [NoKubernetes-139000] minikube v1.31.2 on Darwin 13.6 (arm64)
	  - MINIKUBE_LOCATION=17297
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1010/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1010/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-139000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-139000 "sudo systemctl is-active --quiet service kubelet": exit status 89 (43.251042ms)
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p NoKubernetes-139000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

TestNoKubernetes/serial/ProfileList (0.14s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.14s)

TestNoKubernetes/serial/Stop (0.06s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-139000
--- PASS: TestNoKubernetes/serial/Stop (0.06s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-139000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-139000 "sudo systemctl is-active --quiet service kubelet": exit status 89 (42.349291ms)
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p NoKubernetes-139000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStartStop/group/old-k8s-version/serial/Stop (0.06s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-925000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (0.06s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-925000 -n old-k8s-version-925000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-925000 -n old-k8s-version-925000: exit status 7 (26.217166ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-925000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.08s)
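
The EnableAddonAfterStop flow can be replayed from the commands logged above: minikube status exits with status 7 when the profile exists but the host is stopped (which the test tolerates), and the addon can still be enabled against the stopped profile:

	$ out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-925000 -n old-k8s-version-925000
	Stopped
	$ echo $?    # 7: host stopped; the test treats this as acceptable
	$ out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-925000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4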

TestStartStop/group/no-preload/serial/Stop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-583000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/no-preload/serial/Stop (0.06s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-583000 -n no-preload-583000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-583000 -n no-preload-583000: exit status 7 (28.585292ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-583000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/embed-certs/serial/Stop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-064000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/embed-certs/serial/Stop (0.06s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-064000 -n embed-certs-064000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-064000 -n embed-certs-064000: exit status 7 (26.794333ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-064000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-941000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-941000 -n default-k8s-diff-port-941000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-941000 -n default-k8s-diff-port-941000: exit status 7 (27.464833ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-941000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.08s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-140000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)
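
The command above uses minikube's addon customization flags, --images=Component=image and --registries=Component=registry; fake.domain deliberately points the MetricsServer image at an unreachable registry, presumably so later steps can verify the rewritten image reference without a real pull. The same invocation, split for readability:

	$ out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-140000 \
	    --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
	    --registries=MetricsServer=fake.domain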

TestStartStop/group/newest-cni/serial/Stop (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-140000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/newest-cni/serial/Stop (0.06s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-140000 -n newest-cni-140000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-140000 -n newest-cni-140000: exit status 7 (27.985041ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-140000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)


Test skip (24/255)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)
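
This skip fires whenever a preloaded images tarball is already on disk. A hedged way to confirm that locally (the cache path reflects minikube's default layout and is an assumption, not something shown in this log):

	$ ls ~/.minikube/cache/preloaded-tarball/
	preloaded-images-k8s-...-v1.16.0-...-arm64.tar.lz4    # illustrative name; the exact tag varies by release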

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.28.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.2/cached-images (0.00s)

TestDownloadOnly/v1.28.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.2/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.2/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:210: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:420: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:474: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/MountCmd/specific-port (14.06s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-742000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port3528111918/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-742000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (69.57225ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-742000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (71.162834ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-742000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (72.070583ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-742000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (69.184125ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-742000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (71.119958ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-742000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (69.726667ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-742000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (69.669875ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
E0925 04:10:26.840765    1469 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/client.crt: no such file or directory
E0925 04:10:26.847523    1469 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/client.crt: no such file or directory
E0925 04:10:26.859589    1469 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/client.crt: no such file or directory
E0925 04:10:26.881640    1469 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/client.crt: no such file or directory
E0925 04:10:26.923691    1469 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/client.crt: no such file or directory
E0925 04:10:27.005752    1469 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/client.crt: no such file or directory
E0925 04:10:27.167790    1469 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/client.crt: no such file or directory
E0925 04:10:27.489853    1469 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/client.crt: no such file or directory
E0925 04:10:28.131954    1469 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/client.crt: no such file or directory
E0925 04:10:29.413699    1469 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/client.crt: no such file or directory
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 ssh "findmnt -T /mount-9p | grep 9p"
E0925 04:10:31.975816    1469 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/client.crt: no such file or directory
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-742000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (69.974083ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:251: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-742000 ssh "sudo umount -f /mount-9p": exit status 1 (69.595458ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-742000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-742000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port3528111918/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (14.06s)
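
The probe that kept failing above reduces to two commands: a background 9p mount daemon on the host and an ssh check inside the guest. A condensed sketch (/tmp/mnt stands in for the long per-test temp dir used in the log):

	$ out/minikube-darwin-arm64 mount -p functional-742000 /tmp/mnt:/mount-9p --port 46464 &
	$ out/minikube-darwin-arm64 -p functional-742000 ssh "findmnt -T /mount-9p | grep 9p"

On this macOS runner the mount never appears because, as the skip message notes, the non-code-signed binary needs interactive approval before it may listen on a non-localhost port.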

TestFunctional/parallel/MountCmd/VerifyCleanup (11.22s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-742000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3133471315/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-742000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3133471315/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-742000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3133471315/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-742000 ssh "findmnt -T" /mount1: exit status 1 (92.274958ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-742000 ssh "findmnt -T" /mount3: exit status 1 (68.913875ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-742000 ssh "findmnt -T" /mount3: exit status 1 (69.38275ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-742000 ssh "findmnt -T" /mount3: exit status 1 (67.534709ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 ssh "findmnt -T" /mount1
E0925 04:10:37.097132    1469 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1010/.minikube/profiles/addons-183000/client.crt: no such file or directory
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-742000 ssh "findmnt -T" /mount3: exit status 1 (67.926958ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-742000 ssh "findmnt -T" /mount3: exit status 1 (69.692708ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-742000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-742000 ssh "findmnt -T" /mount3: exit status 1 (67.606708ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-742000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3133471315/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-742000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3133471315/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-742000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3133471315/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (11.22s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:297: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.29s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-570000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-570000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-570000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-570000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-570000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-570000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-570000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-570000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-570000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-570000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-570000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-570000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-570000"

>>> host: /etc/hosts:
* Profile "cilium-570000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-570000"

>>> host: /etc/resolv.conf:
* Profile "cilium-570000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-570000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-570000

>>> host: crictl pods:
* Profile "cilium-570000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-570000"

>>> host: crictl containers:
* Profile "cilium-570000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-570000"

>>> k8s: describe netcat deployment:
error: context "cilium-570000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-570000" does not exist

>>> k8s: netcat logs:
error: context "cilium-570000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-570000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-570000" does not exist

>>> k8s: coredns logs:
error: context "cilium-570000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-570000" does not exist

>>> k8s: api server logs:
error: context "cilium-570000" does not exist

>>> host: /etc/cni:
* Profile "cilium-570000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-570000"

>>> host: ip a s:
* Profile "cilium-570000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-570000"

>>> host: ip r s:
* Profile "cilium-570000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-570000"

>>> host: iptables-save:
* Profile "cilium-570000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-570000"

>>> host: iptables table nat:
* Profile "cilium-570000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-570000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-570000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-570000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-570000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-570000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-570000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-570000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-570000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-570000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-570000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-570000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-570000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-570000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-570000"

>>> host: kubelet daemon config:
* Profile "cilium-570000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-570000"

>>> k8s: kubelet logs:
* Profile "cilium-570000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-570000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-570000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-570000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-570000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-570000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-570000

>>> host: docker daemon status:
* Profile "cilium-570000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-570000"

>>> host: docker daemon config:
* Profile "cilium-570000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-570000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-570000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-570000"

>>> host: docker system info:
* Profile "cilium-570000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-570000"

>>> host: cri-docker daemon status:
* Profile "cilium-570000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-570000"

>>> host: cri-docker daemon config:
* Profile "cilium-570000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-570000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-570000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-570000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-570000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-570000"

>>> host: cri-dockerd version:
* Profile "cilium-570000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-570000"

>>> host: containerd daemon status:
* Profile "cilium-570000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-570000"

>>> host: containerd daemon config:
* Profile "cilium-570000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-570000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-570000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-570000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-570000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-570000"

>>> host: containerd config dump:
* Profile "cilium-570000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-570000"

>>> host: crio daemon status:
* Profile "cilium-570000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-570000"

>>> host: crio daemon config:
* Profile "cilium-570000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-570000"

>>> host: /etc/crio:
* Profile "cilium-570000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-570000"

>>> host: crio config:
* Profile "cilium-570000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-570000"

----------------------- debugLogs end: cilium-570000 [took: 2.060137834s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-570000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-570000
--- SKIP: TestNetworkPlugins/group/cilium (2.29s)
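
Every probe in the debug dump failed the same way because the cilium-570000 profile was never started, so no kubeconfig context exists for it. A quick hedged check for that state:

	$ out/minikube-darwin-arm64 profile list        # cilium-570000 should be absent from the table
	$ kubectl config get-contexts cilium-570000     # fails: context not found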

TestStartStop/group/disable-driver-mounts (0.23s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-939000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-939000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.23s)
