Test Report: QEMU_macOS 17243

a4c3e20099a4bdf499fee0d2faaf79bc020e16c9:2023-09-14:31017
Failed tests (91/255)

Order | Failed test | Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 25.38
7 TestDownloadOnly/v1.16.0/kubectl 0
20 TestOffline 9.93
24 TestAddons/parallel/Registry 720.78
25 TestAddons/parallel/Ingress 0.76
26 TestAddons/parallel/InspektorGadget 480.83
30 TestAddons/parallel/CSI 374.09
32 TestAddons/parallel/CloudSpanner 818.09
37 TestCertOptions 10.12
38 TestCertExpiration 195.46
39 TestDockerFlags 10.12
40 TestForceSystemdFlag 12.03
41 TestForceSystemdEnv 9.92
86 TestFunctional/parallel/ServiceCmdConnect 31.51
88 TestFunctional/parallel/PersistentVolumeClaim 240.98
153 TestImageBuild/serial/BuildWithBuildArg 1.05
162 TestIngressAddonLegacy/serial/ValidateIngressAddons 56.32
197 TestMountStart/serial/StartWithMountFirst 10.59
200 TestMultiNode/serial/FreshStart2Nodes 10.4
201 TestMultiNode/serial/DeployApp2Nodes 115.24
202 TestMultiNode/serial/PingHostFrom2Pods 0.08
203 TestMultiNode/serial/AddNode 0.07
204 TestMultiNode/serial/ProfileList 0.1
205 TestMultiNode/serial/CopyFile 0.06
206 TestMultiNode/serial/StopNode 0.13
207 TestMultiNode/serial/StartAfterStop 0.11
208 TestMultiNode/serial/RestartKeepsNodes 5.38
209 TestMultiNode/serial/DeleteNode 0.1
210 TestMultiNode/serial/StopMultiNode 0.15
211 TestMultiNode/serial/RestartMultiNode 5.25
212 TestMultiNode/serial/ValidateNameConflict 19.98
216 TestPreload 9.88
218 TestScheduledStopUnix 9.94
219 TestSkaffold 13.28
222 TestRunningBinaryUpgrade 148.2
224 TestKubernetesUpgrade 15.36
237 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.37
238 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.04
239 TestStoppedBinaryUpgrade/Setup 162
241 TestPause/serial/Start 9.93
251 TestNoKubernetes/serial/StartWithK8s 9.83
252 TestNoKubernetes/serial/StartWithStopK8s 5.47
253 TestNoKubernetes/serial/Start 5.47
257 TestNoKubernetes/serial/StartNoArgs 5.47
259 TestNetworkPlugins/group/auto/Start 10.02
260 TestNetworkPlugins/group/kindnet/Start 9.73
261 TestNetworkPlugins/group/calico/Start 9.69
262 TestNetworkPlugins/group/custom-flannel/Start 10.01
263 TestNetworkPlugins/group/false/Start 9.71
264 TestNetworkPlugins/group/enable-default-cni/Start 9.85
265 TestNetworkPlugins/group/flannel/Start 9.79
266 TestNetworkPlugins/group/bridge/Start 9.76
267 TestNetworkPlugins/group/kubenet/Start 9.63
269 TestStartStop/group/old-k8s-version/serial/FirstStart 10.15
270 TestStoppedBinaryUpgrade/Upgrade 1.93
271 TestStoppedBinaryUpgrade/MinikubeLogs 0.12
273 TestStartStop/group/no-preload/serial/FirstStart 9.82
274 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
275 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.13
278 TestStartStop/group/old-k8s-version/serial/SecondStart 7.26
279 TestStartStop/group/no-preload/serial/DeployApp 0.09
280 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.11
283 TestStartStop/group/no-preload/serial/SecondStart 5.2
284 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
285 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.05
286 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
287 TestStartStop/group/old-k8s-version/serial/Pause 0.1
288 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
289 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
290 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
292 TestStartStop/group/embed-certs/serial/FirstStart 9.87
293 TestStartStop/group/no-preload/serial/Pause 0.11
295 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 11.51
296 TestStartStop/group/embed-certs/serial/DeployApp 0.1
297 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.11
300 TestStartStop/group/embed-certs/serial/SecondStart 7.07
301 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
302 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
305 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.21
306 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
307 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
308 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
309 TestStartStop/group/embed-certs/serial/Pause 0.1
310 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
311 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
312 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.08
314 TestStartStop/group/newest-cni/serial/FirstStart 9.87
315 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.11
320 TestStartStop/group/newest-cni/serial/SecondStart 5.25
323 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
324 TestStartStop/group/newest-cni/serial/Pause 0.1

TestDownloadOnly/v1.16.0/json-events (25.38s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-917000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-917000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=qemu2 : exit status 40 (25.375217084s)

-- stdout --
	{"specversion":"1.0","id":"c3e69bc4-b0be-4ba6-878a-5ed3124acad3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-917000] minikube v1.31.2 on Darwin 13.5.2 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c1cc06c4-f7b5-4ec1-a405-a33c820084a5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17243"}}
	{"specversion":"1.0","id":"c77ecc1f-ae7f-4dab-9b71-1740afb0e22b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig"}}
	{"specversion":"1.0","id":"09f95985-2500-4153-aef1-93585b81bad5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"953d2325-8a18-4fb1-ab0b-a512838e317c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d2436472-434a-44e7-8e68-f0afb0cfb494","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube"}}
	{"specversion":"1.0","id":"2cc90b47-a076-4b5b-9c9a-3f3b88422f43","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"2253a578-c572-4f74-a238-b4f2389c1c29","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"775b825a-23f5-49aa-831d-665bd4a5e77c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"33e9b735-32e4-4f3a-86c2-687790fa587a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"925f8300-34ca-4b42-9733-542e5574bf45","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node download-only-917000 in cluster download-only-917000","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"ac72c25e-8aee-40a2-b451-dae3e1d0c0e2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.16.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"17c5b65a-3f29-4aea-b473-8be542269f89","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/17243-1006/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x10861fb20 0x10861fb20 0x10861fb20 0x10861fb20 0x10861fb20 0x10861fb20 0x10861fb20] Decompressors:map[bz2:0x140005336f0 gz:0x140005336f8 tar:0x140005336a0 tar.bz2:0x140005336b0 tar.gz:0x140005336c0 tar.xz:0x140005336d0 tar.zst:0x140005336e0 tbz2:0x140005336b0 tgz:0x140005
336c0 txz:0x140005336d0 tzst:0x140005336e0 xz:0x14000533700 zip:0x14000533710 zst:0x14000533708] Getters:map[file:0x14000062700 http:0x1400017e5a0 https:0x1400017e5f0] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"533e7c29-2519-43de-b805-1fcb9ee1f22e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0914 14:35:32.928760    1435 out.go:296] Setting OutFile to fd 1 ...
	I0914 14:35:32.928895    1435 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 14:35:32.928898    1435 out.go:309] Setting ErrFile to fd 2...
	I0914 14:35:32.928901    1435 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 14:35:32.929035    1435 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17243-1006/.minikube/bin
	W0914 14:35:32.929122    1435 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17243-1006/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17243-1006/.minikube/config/config.json: no such file or directory
	I0914 14:35:32.930265    1435 out.go:303] Setting JSON to true
	I0914 14:35:32.946630    1435 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":306,"bootTime":1694727026,"procs":399,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 14:35:32.946712    1435 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0914 14:35:32.952236    1435 out.go:97] [download-only-917000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0914 14:35:32.956218    1435 out.go:169] MINIKUBE_LOCATION=17243
	W0914 14:35:32.952390    1435 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball: no such file or directory
	I0914 14:35:32.952432    1435 notify.go:220] Checking for updates...
	I0914 14:35:32.963164    1435 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	I0914 14:35:32.966266    1435 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 14:35:32.969131    1435 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 14:35:32.972181    1435 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	W0914 14:35:32.978094    1435 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0914 14:35:32.978282    1435 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 14:35:32.983278    1435 out.go:97] Using the qemu2 driver based on user configuration
	I0914 14:35:32.983298    1435 start.go:298] selected driver: qemu2
	I0914 14:35:32.983301    1435 start.go:902] validating driver "qemu2" against <nil>
	I0914 14:35:32.983367    1435 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0914 14:35:32.986172    1435 out.go:169] Automatically selected the socket_vmnet network
	I0914 14:35:32.991532    1435 start_flags.go:384] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0914 14:35:32.991608    1435 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0914 14:35:32.991663    1435 cni.go:84] Creating CNI manager for ""
	I0914 14:35:32.991681    1435 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0914 14:35:32.991692    1435 start_flags.go:321] config:
	{Name:download-only-917000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-917000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRun
time:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 14:35:32.996903    1435 iso.go:125] acquiring lock: {Name:mkd5b68b252c18146a825ff5365883948b3c6983 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 14:35:33.001169    1435 out.go:97] Downloading VM boot image ...
	I0914 14:35:33.001186    1435 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1694625400-17243-arm64.iso
	I0914 14:35:43.903863    1435 out.go:97] Starting control plane node download-only-917000 in cluster download-only-917000
	I0914 14:35:43.903888    1435 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0914 14:35:44.020559    1435 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0914 14:35:44.020573    1435 cache.go:57] Caching tarball of preloaded images
	I0914 14:35:44.020814    1435 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0914 14:35:44.025893    1435 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0914 14:35:44.025905    1435 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0914 14:35:44.234154    1435 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4?checksum=md5:a000baffb0664b293d602f95ed25caa6 -> /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0914 14:35:57.209980    1435 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0914 14:35:57.210100    1435 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0914 14:35:57.852789    1435 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0914 14:35:57.852982    1435 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/download-only-917000/config.json ...
	I0914 14:35:57.853001    1435 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/download-only-917000/config.json: {Name:mk282f6e537d7ce3cce445646d350fe24efa799f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 14:35:57.853243    1435 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0914 14:35:57.853407    1435 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 -> /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/darwin/arm64/v1.16.0/kubectl
	I0914 14:35:58.230896    1435 out.go:169] 
	W0914 14:35:58.234643    1435 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/17243-1006/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x10861fb20 0x10861fb20 0x10861fb20 0x10861fb20 0x10861fb20 0x10861fb20 0x10861fb20] Decompressors:map[bz2:0x140005336f0 gz:0x140005336f8 tar:0x140005336a0 tar.bz2:0x140005336b0 tar.gz:0x140005336c0 tar.xz:0x140005336d0 tar.zst:0x140005336e0 tbz2:0x140005336b0 tgz:0x140005336c0 txz:0x140005336d0 tzst:0x140005336e0 xz:0x14000533700 zip:0x14000533710 zst:0x14000533708] Getters:map[file:0x14000062700 http:0x1400017e5a0 https:0x1400017e5f0] Dir:false ProgressListener
:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0914 14:35:58.234667    1435 out_reason.go:110] 
	W0914 14:35:58.241744    1435 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 14:35:58.245578    1435 out.go:169] 

** /stderr **
aaa_download_only_test.go:71: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-917000" "--force" "--alsologtostderr" "--kubernetes-version=v1.16.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.16.0/json-events (25.38s)
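Editor's note: the failure above boils down to a 404 when the getter fetches the SHA1 checksum for the v1.16.0 darwin/arm64 kubectl binary; v1.16.0 most likely predates darwin/arm64 kubectl release artifacts, so this download cannot succeed on an arm64 Mac runner. A minimal Go sketch (not part of the test suite, assuming outbound HTTPS access from the runner) to confirm the response code reported in the INET_CACHE_KUBECTL error:

	// Probe the checksum URL copied verbatim from the failure message above
	// and print the HTTP status it returns.
	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		url := "https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1"
		resp, err := http.Head(url)
		if err != nil {
			fmt.Println("request error:", err)
			return
		}
		defer resp.Body.Close()
		// A 404 here matches the "bad response code: 404" reported by the getter.
		fmt.Println(url, "->", resp.Status)
	}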

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:160: expected the file for binary exist at "/Users/jenkins/minikube-integration/17243-1006/.minikube/cache/darwin/arm64/v1.16.0/kubectl" but got error stat /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/darwin/arm64/v1.16.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestOffline (9.93s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-291000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-291000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.793119709s)

-- stdout --
	* [offline-docker-291000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17243
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node offline-docker-291000 in cluster offline-docker-291000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-291000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0914 15:18:52.281345    4109 out.go:296] Setting OutFile to fd 1 ...
	I0914 15:18:52.281506    4109 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:18:52.281509    4109 out.go:309] Setting ErrFile to fd 2...
	I0914 15:18:52.281511    4109 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:18:52.281657    4109 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17243-1006/.minikube/bin
	I0914 15:18:52.282761    4109 out.go:303] Setting JSON to false
	I0914 15:18:52.299575    4109 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2906,"bootTime":1694727026,"procs":407,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 15:18:52.299650    4109 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0914 15:18:52.304232    4109 out.go:177] * [offline-docker-291000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0914 15:18:52.312227    4109 out.go:177]   - MINIKUBE_LOCATION=17243
	I0914 15:18:52.312288    4109 notify.go:220] Checking for updates...
	I0914 15:18:52.316233    4109 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	I0914 15:18:52.319235    4109 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 15:18:52.320447    4109 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 15:18:52.323246    4109 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	I0914 15:18:52.326235    4109 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 15:18:52.329692    4109 config.go:182] Loaded profile config "multinode-463000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 15:18:52.329749    4109 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 15:18:52.333192    4109 out.go:177] * Using the qemu2 driver based on user configuration
	I0914 15:18:52.340209    4109 start.go:298] selected driver: qemu2
	I0914 15:18:52.340215    4109 start.go:902] validating driver "qemu2" against <nil>
	I0914 15:18:52.340221    4109 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 15:18:52.342250    4109 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0914 15:18:52.345138    4109 out.go:177] * Automatically selected the socket_vmnet network
	I0914 15:18:52.348291    4109 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 15:18:52.348314    4109 cni.go:84] Creating CNI manager for ""
	I0914 15:18:52.348322    4109 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 15:18:52.348325    4109 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0914 15:18:52.348331    4109 start_flags.go:321] config:
	{Name:offline-docker-291000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:offline-docker-291000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthS
ock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 15:18:52.352421    4109 iso.go:125] acquiring lock: {Name:mkd5b68b252c18146a825ff5365883948b3c6983 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 15:18:52.355329    4109 out.go:177] * Starting control plane node offline-docker-291000 in cluster offline-docker-291000
	I0914 15:18:52.363219    4109 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0914 15:18:52.363240    4109 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0914 15:18:52.363248    4109 cache.go:57] Caching tarball of preloaded images
	I0914 15:18:52.363306    4109 preload.go:174] Found /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 15:18:52.363312    4109 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0914 15:18:52.363377    4109 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/offline-docker-291000/config.json ...
	I0914 15:18:52.363389    4109 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/offline-docker-291000/config.json: {Name:mk61fc53b890729d99d2d602abf39629e1926ebb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 15:18:52.363582    4109 start.go:365] acquiring machines lock for offline-docker-291000: {Name:mk831f2bd5a9a0c932c8a59972319d68edb53d07 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 15:18:52.363610    4109 start.go:369] acquired machines lock for "offline-docker-291000" in 21.667µs
	I0914 15:18:52.363621    4109 start.go:93] Provisioning new machine with config: &{Name:offline-docker-291000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.28.1 ClusterName:offline-docker-291000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 15:18:52.363650    4109 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 15:18:52.372248    4109 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0914 15:18:52.386130    4109 start.go:159] libmachine.API.Create for "offline-docker-291000" (driver="qemu2")
	I0914 15:18:52.386159    4109 client.go:168] LocalClient.Create starting
	I0914 15:18:52.386234    4109 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem
	I0914 15:18:52.386263    4109 main.go:141] libmachine: Decoding PEM data...
	I0914 15:18:52.386274    4109 main.go:141] libmachine: Parsing certificate...
	I0914 15:18:52.386319    4109 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/cert.pem
	I0914 15:18:52.386337    4109 main.go:141] libmachine: Decoding PEM data...
	I0914 15:18:52.386344    4109 main.go:141] libmachine: Parsing certificate...
	I0914 15:18:52.386678    4109 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17243-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1694625400-17243-arm64.iso...
	I0914 15:18:52.520518    4109 main.go:141] libmachine: Creating SSH key...
	I0914 15:18:52.616231    4109 main.go:141] libmachine: Creating Disk image...
	I0914 15:18:52.616239    4109 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 15:18:52.616401    4109 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/offline-docker-291000/disk.qcow2.raw /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/offline-docker-291000/disk.qcow2
	I0914 15:18:52.720549    4109 main.go:141] libmachine: STDOUT: 
	I0914 15:18:52.720578    4109 main.go:141] libmachine: STDERR: 
	I0914 15:18:52.720691    4109 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/offline-docker-291000/disk.qcow2 +20000M
	I0914 15:18:52.733803    4109 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 15:18:52.733828    4109 main.go:141] libmachine: STDERR: 
	I0914 15:18:52.733876    4109 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/offline-docker-291000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/offline-docker-291000/disk.qcow2
	I0914 15:18:52.733886    4109 main.go:141] libmachine: Starting QEMU VM...
	I0914 15:18:52.733950    4109 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/offline-docker-291000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/offline-docker-291000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/offline-docker-291000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:47:51:3c:9c:a7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/offline-docker-291000/disk.qcow2
	I0914 15:18:52.736249    4109 main.go:141] libmachine: STDOUT: 
	I0914 15:18:52.736278    4109 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 15:18:52.736314    4109 client.go:171] LocalClient.Create took 350.156291ms
	I0914 15:18:54.738338    4109 start.go:128] duration metric: createHost completed in 2.374732917s
	I0914 15:18:54.738360    4109 start.go:83] releasing machines lock for "offline-docker-291000", held for 2.374797167s
	W0914 15:18:54.738373    4109 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:18:54.747199    4109 out.go:177] * Deleting "offline-docker-291000" in qemu2 ...
	W0914 15:18:54.754950    4109 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:18:54.754959    4109 start.go:703] Will try again in 5 seconds ...
	I0914 15:18:59.755808    4109 start.go:365] acquiring machines lock for offline-docker-291000: {Name:mk831f2bd5a9a0c932c8a59972319d68edb53d07 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 15:18:59.756268    4109 start.go:369] acquired machines lock for "offline-docker-291000" in 328.291µs
	I0914 15:18:59.756386    4109 start.go:93] Provisioning new machine with config: &{Name:offline-docker-291000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.28.1 ClusterName:offline-docker-291000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 15:18:59.756662    4109 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 15:18:59.765883    4109 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0914 15:18:59.813380    4109 start.go:159] libmachine.API.Create for "offline-docker-291000" (driver="qemu2")
	I0914 15:18:59.813427    4109 client.go:168] LocalClient.Create starting
	I0914 15:18:59.813535    4109 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem
	I0914 15:18:59.813595    4109 main.go:141] libmachine: Decoding PEM data...
	I0914 15:18:59.813613    4109 main.go:141] libmachine: Parsing certificate...
	I0914 15:18:59.813673    4109 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/cert.pem
	I0914 15:18:59.813709    4109 main.go:141] libmachine: Decoding PEM data...
	I0914 15:18:59.813726    4109 main.go:141] libmachine: Parsing certificate...
	I0914 15:18:59.814214    4109 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17243-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1694625400-17243-arm64.iso...
	I0914 15:18:59.938379    4109 main.go:141] libmachine: Creating SSH key...
	I0914 15:18:59.991963    4109 main.go:141] libmachine: Creating Disk image...
	I0914 15:18:59.991972    4109 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 15:18:59.992126    4109 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/offline-docker-291000/disk.qcow2.raw /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/offline-docker-291000/disk.qcow2
	I0914 15:19:00.000780    4109 main.go:141] libmachine: STDOUT: 
	I0914 15:19:00.000795    4109 main.go:141] libmachine: STDERR: 
	I0914 15:19:00.000851    4109 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/offline-docker-291000/disk.qcow2 +20000M
	I0914 15:19:00.008063    4109 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 15:19:00.008088    4109 main.go:141] libmachine: STDERR: 
	I0914 15:19:00.008104    4109 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/offline-docker-291000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/offline-docker-291000/disk.qcow2
	I0914 15:19:00.008111    4109 main.go:141] libmachine: Starting QEMU VM...
	I0914 15:19:00.008145    4109 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/offline-docker-291000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/offline-docker-291000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/offline-docker-291000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:54:24:18:1f:3e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/offline-docker-291000/disk.qcow2
	I0914 15:19:00.009781    4109 main.go:141] libmachine: STDOUT: 
	I0914 15:19:00.009793    4109 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 15:19:00.009806    4109 client.go:171] LocalClient.Create took 196.378541ms
	I0914 15:19:02.011844    4109 start.go:128] duration metric: createHost completed in 2.25521775s
	I0914 15:19:02.011872    4109 start.go:83] releasing machines lock for "offline-docker-291000", held for 2.255630167s
	W0914 15:19:02.012003    4109 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-291000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-291000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:19:02.021301    4109 out.go:177] 
	W0914 15:19:02.025188    4109 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 15:19:02.025198    4109 out.go:239] * 
	* 
	W0914 15:19:02.025674    4109 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 15:19:02.036294    4109 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-291000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:523: *** TestOffline FAILED at 2023-09-14 15:19:02.046448 -0700 PDT m=+2609.253762084
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-291000 -n offline-docker-291000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-291000 -n offline-docker-291000: exit status 7 (33.657959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-291000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-291000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-291000
--- FAIL: TestOffline (9.93s)
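Editor's note: both VM creation attempts above fail on the same error, connecting to "/var/run/socket_vmnet" is refused, which points at the socket_vmnet daemon on the CI host not running or not accepting connections; any qemu2 test that needs the socket_vmnet network will fail the same way. A minimal Go sketch (illustrative only, socket path taken from the log) of checking whether anything accepts connections on that socket:

	// Attempt a plain connect(2) on the socket_vmnet control socket.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// "connection refused" mirrors the StartHost failures above.
			fmt.Println("connect failed:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections on", sock)
	}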

TestAddons/parallel/Registry (720.78s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:304: failed waiting for registry replicacontroller to stabilize: timed out waiting for the condition
addons_test.go:306: registry stabilized in 6m0.001591208s
addons_test.go:308: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
addons_test.go:308: ***** TestAddons/parallel/Registry: pod "actual-registry=true" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-388000 -n addons-388000
addons_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-388000 -n addons-388000: exit status 7 (36.4125ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0914 14:55:07.506760    1874 status.go:249] status error: host: state: connect: dial unix /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/monitor: connect: connection refused

** /stderr **
addons_test.go:308: status error: exit status 7 (may be ok)
addons_test.go:308: "addons-388000" apiserver is not running, skipping kubectl commands (state="Nonexistent")
addons_test.go:309: failed waiting for pod actual-registry: actual-registry=true within 6m0s: context deadline exceeded
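Editor's note: the assertion above polls kube-system for up to 6m0s for pods carrying the label actual-registry=true, and times out apparently because the addons-388000 apiserver itself is unreachable (status "Nonexistent" above). A hedged client-go sketch of the equivalent one-shot query, assuming a reachable cluster and that KUBECONFIG points at a valid kubeconfig as it does in this log:

	// List kube-system pods matching the label the Registry addon test waits for.
	package main

	import (
		"context"
		"fmt"
		"os"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
		if err != nil {
			fmt.Println("kubeconfig error:", err)
			return
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			fmt.Println("client error:", err)
			return
		}
		pods, err := clientset.CoreV1().Pods("kube-system").List(context.Background(),
			metav1.ListOptions{LabelSelector: "actual-registry=true"})
		if err != nil {
			fmt.Println("list error:", err)
			return
		}
		for _, p := range pods.Items {
			fmt.Println(p.Name, p.Status.Phase)
		}
	}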
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-388000 -n addons-388000
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-388000 logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-917000 | jenkins | v1.31.2 | 14 Sep 23 14:35 PDT |                     |
	|         | -p download-only-917000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-917000 | jenkins | v1.31.2 | 14 Sep 23 14:35 PDT |                     |
	|         | -p download-only-917000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.31.2 | 14 Sep 23 14:36 PDT | 14 Sep 23 14:36 PDT |
	| delete  | -p download-only-917000        | download-only-917000 | jenkins | v1.31.2 | 14 Sep 23 14:36 PDT | 14 Sep 23 14:36 PDT |
	| delete  | -p download-only-917000        | download-only-917000 | jenkins | v1.31.2 | 14 Sep 23 14:36 PDT | 14 Sep 23 14:36 PDT |
	| start   | --download-only -p             | binary-mirror-231000 | jenkins | v1.31.2 | 14 Sep 23 14:36 PDT |                     |
	|         | binary-mirror-231000           |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --binary-mirror                |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49379         |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-231000        | binary-mirror-231000 | jenkins | v1.31.2 | 14 Sep 23 14:36 PDT | 14 Sep 23 14:36 PDT |
	| start   | -p addons-388000               | addons-388000        | jenkins | v1.31.2 | 14 Sep 23 14:36 PDT | 14 Sep 23 14:43 PDT |
	|         | --wait=true --memory=4000      |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --addons=registry              |                      |         |         |                     |                     |
	|         | --addons=metrics-server        |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                      |         |         |                     |                     |
	|         | --addons=gcp-auth              |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|         | --addons=ingress               |                      |         |         |                     |                     |
	|         | --addons=ingress-dns           |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-388000        | jenkins | v1.31.2 | 14 Sep 23 14:55 PDT |                     |
	|         | addons-388000                  |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
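For reference, the addons-388000 start recorded in the table above corresponds to the following single-line invocation. This is only a reconstruction from the table rows (the binary under test is the minikube-darwin-arm64 build, so the plain "minikube" command name here is illustrative):

    minikube start -p addons-388000 --wait=true --memory=4000 --alsologtostderr \
      --addons=registry --addons=metrics-server --addons=volumesnapshots \
      --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner \
      --addons=inspektor-gadget --driver=qemu2 --addons=ingress --addons=ingress-dns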
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/14 14:36:23
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.21.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 14:36:23.572515    1522 out.go:296] Setting OutFile to fd 1 ...
	I0914 14:36:23.572636    1522 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 14:36:23.572639    1522 out.go:309] Setting ErrFile to fd 2...
	I0914 14:36:23.572642    1522 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 14:36:23.572752    1522 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17243-1006/.minikube/bin
	I0914 14:36:23.573756    1522 out.go:303] Setting JSON to false
	I0914 14:36:23.588610    1522 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":357,"bootTime":1694727026,"procs":391,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 14:36:23.588683    1522 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0914 14:36:23.593630    1522 out.go:177] * [addons-388000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0914 14:36:23.600459    1522 out.go:177]   - MINIKUBE_LOCATION=17243
	I0914 14:36:23.600497    1522 notify.go:220] Checking for updates...
	I0914 14:36:23.603591    1522 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	I0914 14:36:23.606425    1522 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 14:36:23.609496    1522 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 14:36:23.612541    1522 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	I0914 14:36:23.615423    1522 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 14:36:23.618648    1522 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 14:36:23.622479    1522 out.go:177] * Using the qemu2 driver based on user configuration
	I0914 14:36:23.629482    1522 start.go:298] selected driver: qemu2
	I0914 14:36:23.629487    1522 start.go:902] validating driver "qemu2" against <nil>
	I0914 14:36:23.629493    1522 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 14:36:23.631382    1522 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0914 14:36:23.634542    1522 out.go:177] * Automatically selected the socket_vmnet network
	I0914 14:36:23.637548    1522 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 14:36:23.637570    1522 cni.go:84] Creating CNI manager for ""
	I0914 14:36:23.637578    1522 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 14:36:23.637583    1522 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0914 14:36:23.637590    1522 start_flags.go:321] config:
	{Name:addons-388000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:addons-388000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 14:36:23.641729    1522 iso.go:125] acquiring lock: {Name:mkd5b68b252c18146a825ff5365883948b3c6983 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 14:36:23.649492    1522 out.go:177] * Starting control plane node addons-388000 in cluster addons-388000
	I0914 14:36:23.653459    1522 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0914 14:36:23.653478    1522 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0914 14:36:23.653492    1522 cache.go:57] Caching tarball of preloaded images
	I0914 14:36:23.653557    1522 preload.go:174] Found /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 14:36:23.653564    1522 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0914 14:36:23.653811    1522 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/config.json ...
	I0914 14:36:23.653825    1522 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/config.json: {Name:mk9010c5dfb0ad4a966bb29118112217ba3b6cf7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 14:36:23.654041    1522 start.go:365] acquiring machines lock for addons-388000: {Name:mk831f2bd5a9a0c932c8a59972319d68edb53d07 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 14:36:23.654147    1522 start.go:369] acquired machines lock for "addons-388000" in 99.875µs
	I0914 14:36:23.654159    1522 start.go:93] Provisioning new machine with config: &{Name:addons-388000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:addons-388000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 14:36:23.654194    1522 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 14:36:23.662516    1522 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0914 14:36:23.982709    1522 start.go:159] libmachine.API.Create for "addons-388000" (driver="qemu2")
	I0914 14:36:23.982756    1522 client.go:168] LocalClient.Create starting
	I0914 14:36:23.982899    1522 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem
	I0914 14:36:24.329911    1522 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/cert.pem
	I0914 14:36:24.425142    1522 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17243-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1694625400-17243-arm64.iso...
	I0914 14:36:24.784281    1522 main.go:141] libmachine: Creating SSH key...
	I0914 14:36:25.013863    1522 main.go:141] libmachine: Creating Disk image...
	I0914 14:36:25.013874    1522 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 14:36:25.014143    1522 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/disk.qcow2.raw /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/disk.qcow2
	I0914 14:36:25.048599    1522 main.go:141] libmachine: STDOUT: 
	I0914 14:36:25.048634    1522 main.go:141] libmachine: STDERR: 
	I0914 14:36:25.048701    1522 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/disk.qcow2 +20000M
	I0914 14:36:25.056105    1522 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 14:36:25.056122    1522 main.go:141] libmachine: STDERR: 
	I0914 14:36:25.056141    1522 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/disk.qcow2
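The two qemu-img steps above can be reproduced and checked by hand if the resulting image is suspect. A minimal sketch (paths shortened; qemu-img from the Homebrew QEMU install is assumed to be on PATH):

    # convert the raw bootstrap disk to qcow2, grow it by the requested 20000 MB, then inspect it
    qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2
    qemu-img resize disk.qcow2 +20000M
    qemu-img info disk.qcow2   # reports format, virtual size and allocated size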
	I0914 14:36:25.056150    1522 main.go:141] libmachine: Starting QEMU VM...
	I0914 14:36:25.056194    1522 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:ab:b1:c2:6f:25 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/disk.qcow2
	I0914 14:36:25.122275    1522 main.go:141] libmachine: STDOUT: 
	I0914 14:36:25.122322    1522 main.go:141] libmachine: STDERR: 
	I0914 14:36:25.122327    1522 main.go:141] libmachine: Attempt 0
	I0914 14:36:25.122346    1522 main.go:141] libmachine: Searching for fa:ab:b1:c2:6f:25 in /var/db/dhcpd_leases ...
	I0914 14:36:27.123500    1522 main.go:141] libmachine: Attempt 1
	I0914 14:36:27.123581    1522 main.go:141] libmachine: Searching for fa:ab:b1:c2:6f:25 in /var/db/dhcpd_leases ...
	I0914 14:36:29.124764    1522 main.go:141] libmachine: Attempt 2
	I0914 14:36:29.124788    1522 main.go:141] libmachine: Searching for fa:ab:b1:c2:6f:25 in /var/db/dhcpd_leases ...
	I0914 14:36:31.125900    1522 main.go:141] libmachine: Attempt 3
	I0914 14:36:31.125919    1522 main.go:141] libmachine: Searching for fa:ab:b1:c2:6f:25 in /var/db/dhcpd_leases ...
	I0914 14:36:33.126934    1522 main.go:141] libmachine: Attempt 4
	I0914 14:36:33.126945    1522 main.go:141] libmachine: Searching for fa:ab:b1:c2:6f:25 in /var/db/dhcpd_leases ...
	I0914 14:36:35.127988    1522 main.go:141] libmachine: Attempt 5
	I0914 14:36:35.128006    1522 main.go:141] libmachine: Searching for fa:ab:b1:c2:6f:25 in /var/db/dhcpd_leases ...
	I0914 14:36:37.130061    1522 main.go:141] libmachine: Attempt 6
	I0914 14:36:37.130089    1522 main.go:141] libmachine: Searching for fa:ab:b1:c2:6f:25 in /var/db/dhcpd_leases ...
	I0914 14:36:37.130226    1522 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0914 14:36:37.130272    1522 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:fa:ab:b1:c2:6f:25 ID:1,fa:ab:b1:c2:6f:25 Lease:0x6504ce64}
	I0914 14:36:37.130284    1522 main.go:141] libmachine: Found match: fa:ab:b1:c2:6f:25
	I0914 14:36:37.130296    1522 main.go:141] libmachine: IP: 192.168.105.2
	I0914 14:36:37.130304    1522 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
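The Attempt 0..6 loop above is polling the host's DHCP lease database until the guest's MAC address appears. An equivalent manual check on the macOS host (lease file path and MAC taken from this log) would be something like:

    # each lease is a small block containing name, ip_address, hw_address and lease expiry
    grep -B 3 -A 3 'fa:ab:b1:c2:6f:25' /var/db/dhcpd_leases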
	I0914 14:36:39.152264    1522 machine.go:88] provisioning docker machine ...
	I0914 14:36:39.152328    1522 buildroot.go:166] provisioning hostname "addons-388000"
	I0914 14:36:39.153898    1522 main.go:141] libmachine: Using SSH client type: native
	I0914 14:36:39.154765    1522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102894760] 0x102896ed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0914 14:36:39.154789    1522 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-388000 && echo "addons-388000" | sudo tee /etc/hostname
	I0914 14:36:39.254406    1522 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-388000
	
	I0914 14:36:39.254547    1522 main.go:141] libmachine: Using SSH client type: native
	I0914 14:36:39.254974    1522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102894760] 0x102896ed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0914 14:36:39.254987    1522 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-388000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-388000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-388000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 14:36:39.336783    1522 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 14:36:39.336807    1522 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17243-1006/.minikube CaCertPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17243-1006/.minikube}
	I0914 14:36:39.336834    1522 buildroot.go:174] setting up certificates
	I0914 14:36:39.336842    1522 provision.go:83] configureAuth start
	I0914 14:36:39.336850    1522 provision.go:138] copyHostCerts
	I0914 14:36:39.337062    1522 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17243-1006/.minikube/ca.pem (1082 bytes)
	I0914 14:36:39.337458    1522 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17243-1006/.minikube/cert.pem (1123 bytes)
	I0914 14:36:39.337624    1522 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17243-1006/.minikube/key.pem (1675 bytes)
	I0914 14:36:39.337823    1522 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca-key.pem org=jenkins.addons-388000 san=[192.168.105.2 192.168.105.2 localhost 127.0.0.1 minikube addons-388000]
	I0914 14:36:39.438902    1522 provision.go:172] copyRemoteCerts
	I0914 14:36:39.438967    1522 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 14:36:39.438977    1522 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/id_rsa Username:docker}
	I0914 14:36:39.475382    1522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0914 14:36:39.482935    1522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0914 14:36:39.490611    1522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0914 14:36:39.498058    1522 provision.go:86] duration metric: configureAuth took 161.21375ms
	I0914 14:36:39.498072    1522 buildroot.go:189] setting minikube options for container-runtime
	I0914 14:36:39.498194    1522 config.go:182] Loaded profile config "addons-388000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 14:36:39.498238    1522 main.go:141] libmachine: Using SSH client type: native
	I0914 14:36:39.498454    1522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102894760] 0x102896ed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0914 14:36:39.498461    1522 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0914 14:36:39.568371    1522 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0914 14:36:39.568380    1522 buildroot.go:70] root file system type: tmpfs
	I0914 14:36:39.568444    1522 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0914 14:36:39.568493    1522 main.go:141] libmachine: Using SSH client type: native
	I0914 14:36:39.568758    1522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102894760] 0x102896ed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0914 14:36:39.568795    1522 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0914 14:36:39.642658    1522 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0914 14:36:39.642714    1522 main.go:141] libmachine: Using SSH client type: native
	I0914 14:36:39.642984    1522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102894760] 0x102896ed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0914 14:36:39.642994    1522 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0914 14:36:40.018079    1522 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0914 14:36:40.018095    1522 machine.go:91] provisioned docker machine in 865.825208ms
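If the rendered docker.service unit needs to be inspected after provisioning, it can be read back from the guest. A minimal sketch, assuming this run's profile name and the trailing-command form of minikube ssh:

    # print the docker.service unit (and any drop-ins) as systemd sees it inside the VM
    minikube -p addons-388000 ssh -- sudo systemctl cat docker.service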
	I0914 14:36:40.018101    1522 client.go:171] LocalClient.Create took 16.035747292s
	I0914 14:36:40.018112    1522 start.go:167] duration metric: libmachine.API.Create for "addons-388000" took 16.035815708s
	I0914 14:36:40.018117    1522 start.go:300] post-start starting for "addons-388000" (driver="qemu2")
	I0914 14:36:40.018121    1522 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 14:36:40.018186    1522 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 14:36:40.018197    1522 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/id_rsa Username:docker}
	I0914 14:36:40.056512    1522 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 14:36:40.057796    1522 info.go:137] Remote host: Buildroot 2021.02.12
	I0914 14:36:40.057807    1522 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17243-1006/.minikube/addons for local assets ...
	I0914 14:36:40.057875    1522 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17243-1006/.minikube/files for local assets ...
	I0914 14:36:40.057901    1522 start.go:303] post-start completed in 39.782666ms
	I0914 14:36:40.058218    1522 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/config.json ...
	I0914 14:36:40.058366    1522 start.go:128] duration metric: createHost completed in 16.404584042s
	I0914 14:36:40.058389    1522 main.go:141] libmachine: Using SSH client type: native
	I0914 14:36:40.058608    1522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102894760] 0x102896ed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0914 14:36:40.058612    1522 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0914 14:36:40.126242    1522 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694727400.596628044
	
	I0914 14:36:40.126252    1522 fix.go:206] guest clock: 1694727400.596628044
	I0914 14:36:40.126256    1522 fix.go:219] Guest: 2023-09-14 14:36:40.596628044 -0700 PDT Remote: 2023-09-14 14:36:40.058369 -0700 PDT m=+16.505601626 (delta=538.259044ms)
	I0914 14:36:40.126267    1522 fix.go:190] guest clock delta is within tolerance: 538.259044ms
	I0914 14:36:40.126272    1522 start.go:83] releasing machines lock for "addons-388000", held for 16.472537s
	I0914 14:36:40.126627    1522 ssh_runner.go:195] Run: cat /version.json
	I0914 14:36:40.126630    1522 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 14:36:40.126636    1522 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/id_rsa Username:docker}
	I0914 14:36:40.126680    1522 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/id_rsa Username:docker}
	I0914 14:36:40.164117    1522 ssh_runner.go:195] Run: systemctl --version
	I0914 14:36:40.279852    1522 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 14:36:40.282756    1522 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 14:36:40.282802    1522 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 14:36:40.290141    1522 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 14:36:40.290164    1522 start.go:469] detecting cgroup driver to use...
	I0914 14:36:40.290325    1522 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 14:36:40.298242    1522 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0914 14:36:40.302485    1522 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0914 14:36:40.306314    1522 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0914 14:36:40.306335    1522 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0914 14:36:40.309906    1522 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0914 14:36:40.313708    1522 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0914 14:36:40.317003    1522 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0914 14:36:40.319988    1522 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 14:36:40.323114    1522 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0914 14:36:40.326593    1522 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 14:36:40.329687    1522 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 14:36:40.332474    1522 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 14:36:40.414020    1522 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0914 14:36:40.421074    1522 start.go:469] detecting cgroup driver to use...
	I0914 14:36:40.421134    1522 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0914 14:36:40.426647    1522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 14:36:40.431508    1522 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 14:36:40.437031    1522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 14:36:40.441206    1522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0914 14:36:40.445778    1522 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0914 14:36:40.494559    1522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0914 14:36:40.500245    1522 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 14:36:40.506085    1522 ssh_runner.go:195] Run: which cri-dockerd
	I0914 14:36:40.507323    1522 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0914 14:36:40.510306    1522 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0914 14:36:40.515235    1522 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0914 14:36:40.590641    1522 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0914 14:36:40.670685    1522 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0914 14:36:40.670697    1522 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0914 14:36:40.676022    1522 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 14:36:40.753642    1522 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0914 14:36:41.915654    1522 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.162025209s)
	I0914 14:36:41.915719    1522 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0914 14:36:41.996165    1522 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0914 14:36:42.077673    1522 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0914 14:36:42.158787    1522 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 14:36:42.238393    1522 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0914 14:36:42.246223    1522 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 14:36:42.322653    1522 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0914 14:36:42.347035    1522 start.go:516] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0914 14:36:42.347147    1522 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0914 14:36:42.349276    1522 start.go:537] Will wait 60s for crictl version
	I0914 14:36:42.349310    1522 ssh_runner.go:195] Run: which crictl
	I0914 14:36:42.350645    1522 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 14:36:42.367912    1522 start.go:553] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1alpha2
	I0914 14:36:42.367994    1522 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0914 14:36:42.377957    1522 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0914 14:36:42.394599    1522 out.go:204] * Preparing Kubernetes v1.28.1 on Docker 24.0.6 ...
	I0914 14:36:42.394744    1522 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0914 14:36:42.396150    1522 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 14:36:42.399678    1522 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0914 14:36:42.399720    1522 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0914 14:36:42.404754    1522 docker.go:636] Got preloaded images: 
	I0914 14:36:42.404761    1522 docker.go:642] registry.k8s.io/kube-apiserver:v1.28.1 wasn't preloaded
	I0914 14:36:42.404801    1522 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0914 14:36:42.407644    1522 ssh_runner.go:195] Run: which lz4
	I0914 14:36:42.408926    1522 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0914 14:36:42.410207    1522 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 14:36:42.410221    1522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (356902558 bytes)
	I0914 14:36:43.758723    1522 docker.go:600] Took 1.349866 seconds to copy over tarball
	I0914 14:36:43.758788    1522 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 14:36:44.802481    1522 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.043706042s)
	I0914 14:36:44.802494    1522 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 14:36:44.818862    1522 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0914 14:36:44.822486    1522 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0914 14:36:44.827997    1522 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 14:36:44.904406    1522 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0914 14:36:47.070320    1522 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.165952375s)
	I0914 14:36:47.070426    1522 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0914 14:36:47.076673    1522 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.1
	registry.k8s.io/kube-scheduler:v1.28.1
	registry.k8s.io/kube-proxy:v1.28.1
	registry.k8s.io/kube-controller-manager:v1.28.1
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0914 14:36:47.076684    1522 cache_images.go:84] Images are preloaded, skipping loading
	I0914 14:36:47.076750    1522 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0914 14:36:47.084410    1522 cni.go:84] Creating CNI manager for ""
	I0914 14:36:47.084420    1522 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 14:36:47.084443    1522 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0914 14:36:47.084452    1522 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-388000 NodeName:addons-388000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 14:36:47.084527    1522 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-388000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
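	The generated configuration above is later copied to /var/tmp/minikube/kubeadm.yaml and fed to kubeadm init (see the scp and Start lines further below). When debugging a failed bootstrap it can be replayed without touching the node; a minimal sketch, run inside the guest with the same paths as this log:
	
	    # validate the generated configuration without modifying the node
	    sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" \
	      kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run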
	
	I0914 14:36:47.084571    1522 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-388000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:addons-388000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0914 14:36:47.084633    1522 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0914 14:36:47.087471    1522 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 14:36:47.087501    1522 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 14:36:47.090481    1522 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0914 14:36:47.095702    1522 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 14:36:47.100584    1522 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I0914 14:36:47.105532    1522 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0914 14:36:47.106963    1522 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 14:36:47.110892    1522 certs.go:56] Setting up /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000 for IP: 192.168.105.2
	I0914 14:36:47.110903    1522 certs.go:190] acquiring lock for shared ca certs: {Name:mkd19d6e2143685b57ba1e0d43c4081bbdb26a79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 14:36:47.111053    1522 certs.go:204] generating minikubeCA CA: /Users/jenkins/minikube-integration/17243-1006/.minikube/ca.key
	I0914 14:36:47.228830    1522 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17243-1006/.minikube/ca.crt ...
	I0914 14:36:47.228840    1522 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/ca.crt: {Name:mk1c10f9290e336c983838c8c09bb8cd18a9a4c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 14:36:47.229095    1522 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17243-1006/.minikube/ca.key ...
	I0914 14:36:47.229099    1522 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/ca.key: {Name:mkbc669c78b9b93a07aa566669e7e92430fec9e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 14:36:47.229219    1522 certs.go:204] generating proxyClientCA CA: /Users/jenkins/minikube-integration/17243-1006/.minikube/proxy-client-ca.key
	I0914 14:36:47.333428    1522 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17243-1006/.minikube/proxy-client-ca.crt ...
	I0914 14:36:47.333432    1522 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/proxy-client-ca.crt: {Name:mk85d65dc023d08a0f4cb19cc395e69f12c9ed1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 14:36:47.333577    1522 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17243-1006/.minikube/proxy-client-ca.key ...
	I0914 14:36:47.333579    1522 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/proxy-client-ca.key: {Name:mk62bc08bafeee956e88b9480bac37c2df91bf30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 14:36:47.333721    1522 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/client.key
	I0914 14:36:47.333730    1522 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/client.crt with IP's: []
	I0914 14:36:47.598337    1522 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/client.crt ...
	I0914 14:36:47.598352    1522 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/client.crt: {Name:mk8ecd4e838807718c7ef97bafd599d3b7fd1a4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 14:36:47.598702    1522 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/client.key ...
	I0914 14:36:47.598710    1522 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/client.key: {Name:mk3960bc5fb536243466f07f9f23680cfa92d826 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 14:36:47.598826    1522 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/apiserver.key.96055969
	I0914 14:36:47.598838    1522 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/apiserver.crt.96055969 with IP's: [192.168.105.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0914 14:36:47.656638    1522 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/apiserver.crt.96055969 ...
	I0914 14:36:47.656642    1522 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/apiserver.crt.96055969: {Name:mk3691ba24392ca70b8d7adb6c837bd5b52dfeeb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 14:36:47.656789    1522 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/apiserver.key.96055969 ...
	I0914 14:36:47.656792    1522 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/apiserver.key.96055969: {Name:mk7619af569a08784491e3a0055c754ead430eca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 14:36:47.656913    1522 certs.go:337] copying /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/apiserver.crt.96055969 -> /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/apiserver.crt
	I0914 14:36:47.657047    1522 certs.go:341] copying /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/apiserver.key.96055969 -> /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/apiserver.key
	I0914 14:36:47.657134    1522 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/proxy-client.key
	I0914 14:36:47.657146    1522 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/proxy-client.crt with IP's: []
	I0914 14:36:47.715161    1522 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/proxy-client.crt ...
	I0914 14:36:47.715165    1522 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/proxy-client.crt: {Name:mk5c5221c842b768f8e9ba880dc08acd610bf8dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 14:36:47.715298    1522 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/proxy-client.key ...
	I0914 14:36:47.715301    1522 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/proxy-client.key: {Name:mk620ca3f197a51ffd017e6711b4bab26fb15d79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 14:36:47.715560    1522 certs.go:437] found cert: /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 14:36:47.715594    1522 certs.go:437] found cert: /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem (1082 bytes)
	I0914 14:36:47.715621    1522 certs.go:437] found cert: /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/cert.pem (1123 bytes)
	I0914 14:36:47.715645    1522 certs.go:437] found cert: /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/key.pem (1675 bytes)
	I0914 14:36:47.716027    1522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0914 14:36:47.723894    1522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 14:36:47.731037    1522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 14:36:47.738379    1522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 14:36:47.745927    1522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 14:36:47.752925    1522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 14:36:47.759542    1522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 14:36:47.766602    1522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0914 14:36:47.773763    1522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 14:36:47.780697    1522 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 14:36:47.786484    1522 ssh_runner.go:195] Run: openssl version
	I0914 14:36:47.788649    1522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 14:36:47.791615    1522 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 14:36:47.793075    1522 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 14 21:36 /usr/share/ca-certificates/minikubeCA.pem
	I0914 14:36:47.793092    1522 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 14:36:47.794978    1522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
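The symlink name used above (b5213941.0) is the OpenSSL subject hash of the minikube CA, which is how OpenSSL locates trusted certificates in /etc/ssl/certs. The relationship can be verified directly inside the guest, assuming the same paths as in the log:

    # the subject hash determines the file name OpenSSL looks up in /etc/ssl/certs
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    ls -l "/etc/ssl/certs/${HASH}.0"   # should link back to minikubeCA.pem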
	I0914 14:36:47.798423    1522 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0914 14:36:47.799931    1522 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0914 14:36:47.799971    1522 kubeadm.go:404] StartCluster: {Name:addons-388000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:addons-388000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 14:36:47.800034    1522 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0914 14:36:47.805504    1522 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 14:36:47.808480    1522 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 14:36:47.811111    1522 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 14:36:47.814398    1522 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 14:36:47.814412    1522 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 14:36:47.835210    1522 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0914 14:36:47.835254    1522 kubeadm.go:322] [preflight] Running pre-flight checks
	I0914 14:36:47.889698    1522 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 14:36:47.889750    1522 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 14:36:47.889794    1522 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0914 14:36:47.952261    1522 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 14:36:47.962464    1522 out.go:204]   - Generating certificates and keys ...
	I0914 14:36:47.962497    1522 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0914 14:36:47.962525    1522 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0914 14:36:48.025951    1522 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0914 14:36:48.134925    1522 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0914 14:36:48.186988    1522 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0914 14:36:48.299178    1522 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0914 14:36:48.429498    1522 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0914 14:36:48.429557    1522 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-388000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0914 14:36:48.510620    1522 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0914 14:36:48.510686    1522 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-388000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0914 14:36:48.631510    1522 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0914 14:36:48.668002    1522 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0914 14:36:48.726941    1522 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0914 14:36:48.726969    1522 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 14:36:48.823035    1522 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 14:36:48.918005    1522 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 14:36:49.052610    1522 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 14:36:49.136045    1522 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 14:36:49.136292    1522 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 14:36:49.138218    1522 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 14:36:49.141449    1522 out.go:204]   - Booting up control plane ...
	I0914 14:36:49.141518    1522 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 14:36:49.141563    1522 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 14:36:49.141596    1522 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 14:36:49.146098    1522 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 14:36:49.146527    1522 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 14:36:49.146584    1522 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0914 14:36:49.235726    1522 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 14:36:53.234480    1522 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.002199 seconds
	I0914 14:36:53.234548    1522 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 14:36:53.240692    1522 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 14:36:53.748795    1522 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0914 14:36:53.748894    1522 kubeadm.go:322] [mark-control-plane] Marking the node addons-388000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0914 14:36:54.253997    1522 kubeadm.go:322] [bootstrap-token] Using token: v43sey.bixdamecwwaf1quf
	I0914 14:36:54.261418    1522 out.go:204]   - Configuring RBAC rules ...
	I0914 14:36:54.261475    1522 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 14:36:54.262616    1522 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0914 14:36:54.269041    1522 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 14:36:54.270041    1522 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0914 14:36:54.271028    1522 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 14:36:54.272209    1522 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 14:36:54.276273    1522 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0914 14:36:54.432396    1522 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0914 14:36:54.665469    1522 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0914 14:36:54.665894    1522 kubeadm.go:322] 
	I0914 14:36:54.665937    1522 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0914 14:36:54.665940    1522 kubeadm.go:322] 
	I0914 14:36:54.665992    1522 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0914 14:36:54.665996    1522 kubeadm.go:322] 
	I0914 14:36:54.666008    1522 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0914 14:36:54.666036    1522 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 14:36:54.666071    1522 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 14:36:54.666074    1522 kubeadm.go:322] 
	I0914 14:36:54.666099    1522 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0914 14:36:54.666101    1522 kubeadm.go:322] 
	I0914 14:36:54.666123    1522 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0914 14:36:54.666126    1522 kubeadm.go:322] 
	I0914 14:36:54.666148    1522 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0914 14:36:54.666182    1522 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 14:36:54.666217    1522 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 14:36:54.666220    1522 kubeadm.go:322] 
	I0914 14:36:54.666261    1522 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0914 14:36:54.666306    1522 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0914 14:36:54.666308    1522 kubeadm.go:322] 
	I0914 14:36:54.666396    1522 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token v43sey.bixdamecwwaf1quf \
	I0914 14:36:54.666457    1522 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:64a56fc8b6798e154cd69d815313de8f6cddb906dc6093ac20ba940ac7c1d871 \
	I0914 14:36:54.666472    1522 kubeadm.go:322] 	--control-plane 
	I0914 14:36:54.666475    1522 kubeadm.go:322] 
	I0914 14:36:54.666513    1522 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0914 14:36:54.666517    1522 kubeadm.go:322] 
	I0914 14:36:54.666553    1522 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token v43sey.bixdamecwwaf1quf \
	I0914 14:36:54.666621    1522 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:64a56fc8b6798e154cd69d815313de8f6cddb906dc6093ac20ba940ac7c1d871 
	I0914 14:36:54.666672    1522 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
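The --discovery-token-ca-cert-hash value printed by kubeadm above is the SHA-256 of the DER-encoded public key (SPKI) of the cluster CA certificate. A minimal sketch of recomputing it in Go; the CA path is an assumption taken from the /var/lib/minikube/certs certificateDir reported earlier in this log:

    // cahash.go - recompute the kubeadm discovery token CA cert hash:
    // sha256 over the DER-encoded Subject Public Key Info of the CA cert.
    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        // Path assumed from the certificateDir used in this run.
        data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            fmt.Fprintln(os.Stderr, "no PEM block found")
            os.Exit(1)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }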
	I0914 14:36:54.666677    1522 cni.go:84] Creating CNI manager for ""
	I0914 14:36:54.666685    1522 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 14:36:54.674398    1522 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 14:36:54.677531    1522 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 14:36:54.681843    1522 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
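The 457-byte /etc/cni/net.d/1-k8s.conflist written above is the bridge CNI configuration recommended for the docker runtime on this driver. A minimal sketch of writing a generic bridge + host-local conflist to that location; the JSON content here is an illustrative example, not necessarily the exact file minikube generates:

    // cniconf.go - write an illustrative bridge CNI conflist to the path
    // used in the log above. The JSON is a generic example, not minikube's.
    package main

    import (
        "fmt"
        "os"
    )

    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    `

    func main() {
        if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("wrote /etc/cni/net.d/1-k8s.conflist")
    }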
	I0914 14:36:54.686762    1522 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 14:36:54.686820    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:36:54.686837    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=d7492f2ae2d9b6e62b385ffcd97ebad62c645e82 minikube.k8s.io/name=addons-388000 minikube.k8s.io/updated_at=2023_09_14T14_36_54_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:36:54.745761    1522 ops.go:34] apiserver oom_adj: -16
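The oom_adj probe above finds the kube-apiserver process and reads its OOM adjustment from /proc (the log reports -16, making the apiserver unlikely to be killed under memory pressure). A minimal sketch of the same check, assuming pgrep is available on the guest:

    // oomadj.go - locate kube-apiserver and print its /proc/<pid>/oom_adj,
    // mirroring the probe logged above.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
        if err != nil {
            fmt.Fprintln(os.Stderr, "pgrep:", err)
            os.Exit(1)
        }
        pid := strings.TrimSpace(string(out))

        adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Printf("apiserver oom_adj: %s", adj)
    }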
	I0914 14:36:54.751811    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:36:54.783862    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:36:55.319135    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:36:55.819146    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:36:56.319044    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:36:56.817396    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:36:57.317676    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:36:57.819036    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:36:58.319007    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:36:58.819025    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:36:59.318963    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:36:59.819032    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:00.318959    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:00.819007    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:01.318925    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:01.819004    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:02.318900    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:02.818938    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:03.318896    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:03.818843    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:04.318914    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:04.818824    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:05.318789    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:05.818890    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:06.318784    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:06.818791    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:07.318787    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:07.357143    1522 kubeadm.go:1081] duration metric: took 12.670689708s to wait for elevateKubeSystemPrivileges.
	I0914 14:37:07.357158    1522 kubeadm.go:406] StartCluster complete in 19.557685291s
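The burst of repeated "kubectl get sa default" runs above is a retry loop: the default service account is polled roughly every half second until it exists, so that the cluster-admin binding created earlier can take effect. A minimal sketch of such a loop; the two-minute timeout is illustrative and not taken from the log:

    // waitsa.go - poll "kubectl get sa default" until the default service
    // account exists, mirroring the retry loop visible in the log above.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(2 * time.Minute) // illustrative timeout
        for time.Now().Before(deadline) {
            cmd := exec.Command("kubectl", "get", "sa", "default",
                "--kubeconfig", "/var/lib/minikube/kubeconfig")
            if err := cmd.Run(); err == nil {
                fmt.Println("default service account is ready")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for the default service account")
    }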
	I0914 14:37:07.357184    1522 settings.go:142] acquiring lock: {Name:mkcccc97e247e7e1b2e556ccc64336c05a92af53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 14:37:07.357360    1522 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17243-1006/kubeconfig
	I0914 14:37:07.357606    1522 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/kubeconfig: {Name:mkeec13fc5a79792669e9cedabfbe21efeb27d3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 14:37:07.357803    1522 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0914 14:37:07.357856    1522 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0914 14:37:07.357902    1522 addons.go:69] Setting volumesnapshots=true in profile "addons-388000"
	I0914 14:37:07.357909    1522 addons.go:231] Setting addon volumesnapshots=true in "addons-388000"
	I0914 14:37:07.357912    1522 addons.go:69] Setting ingress=true in profile "addons-388000"
	I0914 14:37:07.357919    1522 addons.go:231] Setting addon ingress=true in "addons-388000"
	I0914 14:37:07.357926    1522 host.go:66] Checking if "addons-388000" exists ...
	I0914 14:37:07.357934    1522 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-388000"
	I0914 14:37:07.357942    1522 host.go:66] Checking if "addons-388000" exists ...
	I0914 14:37:07.357951    1522 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-388000"
	I0914 14:37:07.357967    1522 host.go:66] Checking if "addons-388000" exists ...
	I0914 14:37:07.357975    1522 addons.go:69] Setting ingress-dns=true in profile "addons-388000"
	I0914 14:37:07.357985    1522 addons.go:69] Setting metrics-server=true in profile "addons-388000"
	I0914 14:37:07.358004    1522 addons.go:231] Setting addon ingress-dns=true in "addons-388000"
	I0914 14:37:07.358008    1522 addons.go:231] Setting addon metrics-server=true in "addons-388000"
	I0914 14:37:07.358046    1522 host.go:66] Checking if "addons-388000" exists ...
	I0914 14:37:07.358051    1522 host.go:66] Checking if "addons-388000" exists ...
	I0914 14:37:07.358066    1522 addons.go:69] Setting inspektor-gadget=true in profile "addons-388000"
	I0914 14:37:07.358074    1522 addons.go:231] Setting addon inspektor-gadget=true in "addons-388000"
	I0914 14:37:07.358086    1522 host.go:66] Checking if "addons-388000" exists ...
	I0914 14:37:07.358133    1522 config.go:182] Loaded profile config "addons-388000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 14:37:07.358210    1522 addons.go:69] Setting registry=true in profile "addons-388000"
	I0914 14:37:07.358222    1522 addons.go:231] Setting addon registry=true in "addons-388000"
	I0914 14:37:07.358259    1522 addons.go:69] Setting cloud-spanner=true in profile "addons-388000"
	I0914 14:37:07.358263    1522 addons.go:69] Setting default-storageclass=true in profile "addons-388000"
	I0914 14:37:07.358265    1522 addons.go:231] Setting addon cloud-spanner=true in "addons-388000"
	I0914 14:37:07.358266    1522 host.go:66] Checking if "addons-388000" exists ...
	I0914 14:37:07.358273    1522 addons.go:69] Setting storage-provisioner=true in profile "addons-388000"
	I0914 14:37:07.358276    1522 host.go:66] Checking if "addons-388000" exists ...
	I0914 14:37:07.358278    1522 addons.go:231] Setting addon storage-provisioner=true in "addons-388000"
	I0914 14:37:07.358289    1522 host.go:66] Checking if "addons-388000" exists ...
	W0914 14:37:07.358332    1522 host.go:54] host status for "addons-388000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/monitor: connect: connection refused
	W0914 14:37:07.358339    1522 addons.go:277] "addons-388000" is not running, setting inspektor-gadget=true and skipping enablement (err=<nil>)
	W0914 14:37:07.358450    1522 host.go:54] host status for "addons-388000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/monitor: connect: connection refused
	W0914 14:37:07.358453    1522 addons.go:277] "addons-388000" is not running, setting volumesnapshots=true and skipping enablement (err=<nil>)
	W0914 14:37:07.358483    1522 host.go:54] host status for "addons-388000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/monitor: connect: connection refused
	W0914 14:37:07.358489    1522 addons.go:277] "addons-388000" is not running, setting cloud-spanner=true and skipping enablement (err=<nil>)
	W0914 14:37:07.358257    1522 host.go:54] host status for "addons-388000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/monitor: connect: connection refused
	W0914 14:37:07.358494    1522 addons.go:277] "addons-388000" is not running, setting ingress=true and skipping enablement (err=<nil>)
	I0914 14:37:07.358496    1522 addons.go:467] Verifying addon ingress=true in "addons-388000"
	W0914 14:37:07.358500    1522 host.go:54] host status for "addons-388000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/monitor: connect: connection refused
	W0914 14:37:07.358504    1522 addons.go:277] "addons-388000" is not running, setting storage-provisioner=true and skipping enablement (err=<nil>)
	I0914 14:37:07.363429    1522 out.go:177] * Verifying ingress addon...
	I0914 14:37:07.358269    1522 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-388000"
	W0914 14:37:07.358528    1522 host.go:54] host status for "addons-388000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/monitor: connect: connection refused
	I0914 14:37:07.358271    1522 addons.go:69] Setting gcp-auth=true in profile "addons-388000"
	W0914 14:37:07.358722    1522 host.go:54] host status for "addons-388000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/monitor: connect: connection refused
	W0914 14:37:07.370487    1522 addons.go:277] "addons-388000" is not running, setting ingress-dns=true and skipping enablement (err=<nil>)
	I0914 14:37:07.370528    1522 mustload.go:65] Loading cluster: addons-388000
	W0914 14:37:07.370533    1522 addons.go:277] "addons-388000" is not running, setting registry=true and skipping enablement (err=<nil>)
	I0914 14:37:07.370877    1522 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0914 14:37:07.371899    1522 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-388000" context rescaled to 1 replicas
	I0914 14:37:07.372685    1522 addons.go:231] Setting addon default-storageclass=true in "addons-388000"
	I0914 14:37:07.374445    1522 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0914 14:37:07.377503    1522 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I0914 14:37:07.377530    1522 addons.go:467] Verifying addon registry=true in "addons-388000"
	I0914 14:37:07.377544    1522 host.go:66] Checking if "addons-388000" exists ...
	I0914 14:37:07.377592    1522 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 14:37:07.377611    1522 config.go:182] Loaded profile config "addons-388000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 14:37:07.379668    1522 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0914 14:37:07.387418    1522 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0914 14:37:07.384502    1522 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 14:37:07.385215    1522 host.go:66] Checking if "addons-388000" exists ...
	I0914 14:37:07.385566    1522 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 14:37:07.399476    1522 out.go:177] * Verifying Kubernetes components...
	I0914 14:37:07.399484    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 14:37:07.405519    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 14:37:07.405539    1522 out.go:177] * Verifying registry addon...
	I0914 14:37:07.409300    1522 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0914 14:37:07.413413    1522 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0914 14:37:07.409310    1522 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/id_rsa Username:docker}
	I0914 14:37:07.409318    1522 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/id_rsa Username:docker}
	I0914 14:37:07.413772    1522 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0914 14:37:07.421473    1522 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0914 14:37:07.425266    1522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 14:37:07.434436    1522 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0914 14:37:07.437375    1522 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=registry
	I0914 14:37:07.438456    1522 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0914 14:37:07.450436    1522 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0914 14:37:07.460462    1522 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0914 14:37:07.463476    1522 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0914 14:37:07.463485    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0914 14:37:07.463494    1522 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/id_rsa Username:docker}
	I0914 14:37:07.497507    1522 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 14:37:07.497516    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0914 14:37:07.503780    1522 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 14:37:07.503787    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 14:37:07.509075    1522 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 14:37:07.509081    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 14:37:07.516870    1522 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 14:37:07.522898    1522 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 14:37:07.539508    1522 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0914 14:37:07.539521    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0914 14:37:07.591865    1522 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0914 14:37:07.591879    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0914 14:37:07.635732    1522 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0914 14:37:07.635742    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0914 14:37:07.644322    1522 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0914 14:37:07.644333    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0914 14:37:07.649557    1522 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0914 14:37:07.649568    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0914 14:37:07.681313    1522 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0914 14:37:07.681325    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0914 14:37:07.685931    1522 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0914 14:37:07.685936    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0914 14:37:07.690914    1522 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0914 14:37:07.690921    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0914 14:37:07.695920    1522 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0914 14:37:07.695926    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0914 14:37:07.700851    1522 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0914 14:37:07.700856    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0914 14:37:07.705677    1522 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0914 14:37:08.213892    1522 start.go:917] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I0914 14:37:08.214323    1522 node_ready.go:35] waiting up to 6m0s for node "addons-388000" to be "Ready" ...
	I0914 14:37:08.215929    1522 node_ready.go:49] node "addons-388000" has status "Ready":"True"
	I0914 14:37:08.215948    1522 node_ready.go:38] duration metric: took 1.599458ms waiting for node "addons-388000" to be "Ready" ...
	I0914 14:37:08.215953    1522 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 14:37:08.218780    1522 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-6php8" in "kube-system" namespace to be "Ready" ...
	I0914 14:37:08.378405    1522 addons.go:467] Verifying addon metrics-server=true in "addons-388000"
	I0914 14:37:08.878056    1522 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.172383083s)
	I0914 14:37:08.878074    1522 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-388000"
	I0914 14:37:08.882346    1522 out.go:177] * Verifying csi-hostpath-driver addon...
	I0914 14:37:08.892719    1522 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0914 14:37:08.895508    1522 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0914 14:37:08.895515    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:08.901644    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:09.404389    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:09.734233    1522 pod_ready.go:97] error getting pod "coredns-5dd5756b68-6php8" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-6php8" not found
	I0914 14:37:09.734244    1522 pod_ready.go:81] duration metric: took 1.515495542s waiting for pod "coredns-5dd5756b68-6php8" in "kube-system" namespace to be "Ready" ...
	E0914 14:37:09.734250    1522 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-6php8" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-6php8" not found
	I0914 14:37:09.734253    1522 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-psn28" in "kube-system" namespace to be "Ready" ...
	I0914 14:37:09.736576    1522 pod_ready.go:92] pod "coredns-5dd5756b68-psn28" in "kube-system" namespace has status "Ready":"True"
	I0914 14:37:09.736583    1522 pod_ready.go:81] duration metric: took 2.327542ms waiting for pod "coredns-5dd5756b68-psn28" in "kube-system" namespace to be "Ready" ...
	I0914 14:37:09.736588    1522 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-388000" in "kube-system" namespace to be "Ready" ...
	I0914 14:37:09.739033    1522 pod_ready.go:92] pod "etcd-addons-388000" in "kube-system" namespace has status "Ready":"True"
	I0914 14:37:09.739038    1522 pod_ready.go:81] duration metric: took 2.447792ms waiting for pod "etcd-addons-388000" in "kube-system" namespace to be "Ready" ...
	I0914 14:37:09.739041    1522 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-388000" in "kube-system" namespace to be "Ready" ...
	I0914 14:37:09.741595    1522 pod_ready.go:92] pod "kube-apiserver-addons-388000" in "kube-system" namespace has status "Ready":"True"
	I0914 14:37:09.741601    1522 pod_ready.go:81] duration metric: took 2.556083ms waiting for pod "kube-apiserver-addons-388000" in "kube-system" namespace to be "Ready" ...
	I0914 14:37:09.741605    1522 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-388000" in "kube-system" namespace to be "Ready" ...
	I0914 14:37:09.904411    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:10.016583    1522 pod_ready.go:92] pod "kube-controller-manager-addons-388000" in "kube-system" namespace has status "Ready":"True"
	I0914 14:37:10.016591    1522 pod_ready.go:81] duration metric: took 274.98975ms waiting for pod "kube-controller-manager-addons-388000" in "kube-system" namespace to be "Ready" ...
	I0914 14:37:10.016595    1522 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8pbsf" in "kube-system" namespace to be "Ready" ...
	I0914 14:37:10.404994    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:10.417030    1522 pod_ready.go:92] pod "kube-proxy-8pbsf" in "kube-system" namespace has status "Ready":"True"
	I0914 14:37:10.417036    1522 pod_ready.go:81] duration metric: took 400.447833ms waiting for pod "kube-proxy-8pbsf" in "kube-system" namespace to be "Ready" ...
	I0914 14:37:10.417041    1522 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-388000" in "kube-system" namespace to be "Ready" ...
	I0914 14:37:10.816814    1522 pod_ready.go:92] pod "kube-scheduler-addons-388000" in "kube-system" namespace has status "Ready":"True"
	I0914 14:37:10.816823    1522 pod_ready.go:81] duration metric: took 399.789417ms waiting for pod "kube-scheduler-addons-388000" in "kube-system" namespace to be "Ready" ...
	I0914 14:37:10.816827    1522 pod_ready.go:38] duration metric: took 2.600935083s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 14:37:10.816835    1522 api_server.go:52] waiting for apiserver process to appear ...
	I0914 14:37:10.816886    1522 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 14:37:10.821727    1522 api_server.go:72] duration metric: took 3.437324417s to wait for apiserver process to appear ...
	I0914 14:37:10.821733    1522 api_server.go:88] waiting for apiserver healthz status ...
	I0914 14:37:10.821738    1522 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I0914 14:37:10.825342    1522 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
	I0914 14:37:10.826107    1522 api_server.go:141] control plane version: v1.28.1
	I0914 14:37:10.826114    1522 api_server.go:131] duration metric: took 4.378333ms to wait for apiserver health ...
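The healthz wait above is a plain HTTPS GET against the apiserver that treats an HTTP 200 / "ok" response as healthy. A minimal sketch using the node IP and port from the log; certificate verification is skipped here purely to keep the example short:

    // healthz.go - probe the apiserver healthz endpoint, as logged above.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "os"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.105.2:8443/healthz") // node IP from the log
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%s returned %d: %s\n", resp.Request.URL, resp.StatusCode, body)
    }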
	I0914 14:37:10.826117    1522 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 14:37:10.904363    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:11.018876    1522 system_pods.go:59] 10 kube-system pods found
	I0914 14:37:11.018886    1522 system_pods.go:61] "coredns-5dd5756b68-psn28" [50c0e128-9a93-456c-83af-dfbcda64eaa4] Running
	I0914 14:37:11.018891    1522 system_pods.go:61] "csi-hostpath-attacher-0" [29be2dba-12b9-4442-8c83-8d24fd054a90] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0914 14:37:11.018894    1522 system_pods.go:61] "csi-hostpath-resizer-0" [11fcc7a2-d176-442f-9cd6-04668da8d423] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0914 14:37:11.018898    1522 system_pods.go:61] "csi-hostpathplugin-b5k2m" [aa03259b-6f1a-4537-95f6-47e8cf8fcc39] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0914 14:37:11.018909    1522 system_pods.go:61] "etcd-addons-388000" [b91a9e99-fb10-4340-977e-536225df8415] Running
	I0914 14:37:11.018914    1522 system_pods.go:61] "kube-apiserver-addons-388000" [43fed39d-32f3-4b45-b43c-d9918758a66c] Running
	I0914 14:37:11.018917    1522 system_pods.go:61] "kube-controller-manager-addons-388000" [31eb0c68-03ca-4907-921b-14ccef970edf] Running
	I0914 14:37:11.018920    1522 system_pods.go:61] "kube-proxy-8pbsf" [e9d3ab50-7594-4360-8226-d37e954aca6e] Running
	I0914 14:37:11.018923    1522 system_pods.go:61] "kube-scheduler-addons-388000" [d931a34d-1c14-4544-80cd-ce847a1f1af8] Running
	I0914 14:37:11.018927    1522 system_pods.go:61] "metrics-server-7c66d45ddc-dtj78" [7b539063-f45b-4a15-97e7-6713ea57e519] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 14:37:11.018932    1522 system_pods.go:74] duration metric: took 192.817125ms to wait for pod list to return data ...
	I0914 14:37:11.018935    1522 default_sa.go:34] waiting for default service account to be created ...
	I0914 14:37:11.216117    1522 default_sa.go:45] found service account: "default"
	I0914 14:37:11.216127    1522 default_sa.go:55] duration metric: took 197.1925ms for default service account to be created ...
	I0914 14:37:11.216130    1522 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 14:37:11.404125    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:11.419144    1522 system_pods.go:86] 10 kube-system pods found
	I0914 14:37:11.419151    1522 system_pods.go:89] "coredns-5dd5756b68-psn28" [50c0e128-9a93-456c-83af-dfbcda64eaa4] Running
	I0914 14:37:11.419155    1522 system_pods.go:89] "csi-hostpath-attacher-0" [29be2dba-12b9-4442-8c83-8d24fd054a90] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0914 14:37:11.419158    1522 system_pods.go:89] "csi-hostpath-resizer-0" [11fcc7a2-d176-442f-9cd6-04668da8d423] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0914 14:37:11.419163    1522 system_pods.go:89] "csi-hostpathplugin-b5k2m" [aa03259b-6f1a-4537-95f6-47e8cf8fcc39] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0914 14:37:11.419167    1522 system_pods.go:89] "etcd-addons-388000" [b91a9e99-fb10-4340-977e-536225df8415] Running
	I0914 14:37:11.419169    1522 system_pods.go:89] "kube-apiserver-addons-388000" [43fed39d-32f3-4b45-b43c-d9918758a66c] Running
	I0914 14:37:11.419176    1522 system_pods.go:89] "kube-controller-manager-addons-388000" [31eb0c68-03ca-4907-921b-14ccef970edf] Running
	I0914 14:37:11.419178    1522 system_pods.go:89] "kube-proxy-8pbsf" [e9d3ab50-7594-4360-8226-d37e954aca6e] Running
	I0914 14:37:11.419180    1522 system_pods.go:89] "kube-scheduler-addons-388000" [d931a34d-1c14-4544-80cd-ce847a1f1af8] Running
	I0914 14:37:11.419183    1522 system_pods.go:89] "metrics-server-7c66d45ddc-dtj78" [7b539063-f45b-4a15-97e7-6713ea57e519] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 14:37:11.419189    1522 system_pods.go:126] duration metric: took 203.059ms to wait for k8s-apps to be running ...
	I0914 14:37:11.419193    1522 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 14:37:11.419242    1522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 14:37:11.424702    1522 system_svc.go:56] duration metric: took 5.506625ms WaitForService to wait for kubelet.
	I0914 14:37:11.424708    1522 kubeadm.go:581] duration metric: took 4.040322208s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0914 14:37:11.424718    1522 node_conditions.go:102] verifying NodePressure condition ...
	I0914 14:37:11.616510    1522 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0914 14:37:11.616524    1522 node_conditions.go:123] node cpu capacity is 2
	I0914 14:37:11.616531    1522 node_conditions.go:105] duration metric: took 191.81375ms to run NodePressure ...
	I0914 14:37:11.616536    1522 start.go:228] waiting for startup goroutines ...
	I0914 14:37:11.904062    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:12.404356    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:12.904283    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:13.404719    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:13.905195    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:14.010940    1522 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0914 14:37:14.010958    1522 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/id_rsa Username:docker}
	I0914 14:37:14.050416    1522 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0914 14:37:14.056158    1522 addons.go:231] Setting addon gcp-auth=true in "addons-388000"
	I0914 14:37:14.056180    1522 host.go:66] Checking if "addons-388000" exists ...
	I0914 14:37:14.056914    1522 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0914 14:37:14.056921    1522 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/id_rsa Username:docker}
	I0914 14:37:14.098984    1522 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0914 14:37:14.102963    1522 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0914 14:37:14.106843    1522 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0914 14:37:14.106851    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0914 14:37:14.112250    1522 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0914 14:37:14.112259    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0914 14:37:14.117057    1522 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0914 14:37:14.117063    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0914 14:37:14.122524    1522 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0914 14:37:14.407542    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:14.453711    1522 addons.go:467] Verifying addon gcp-auth=true in "addons-388000"
	I0914 14:37:14.458827    1522 out.go:177] * Verifying gcp-auth addon...
	I0914 14:37:14.469206    1522 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0914 14:37:14.473873    1522 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0914 14:37:14.473883    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:14.477552    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:14.905449    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:14.981028    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:15.404241    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:15.481017    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:15.904406    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:15.981050    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:16.404161    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:16.481356    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:16.904348    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:16.980852    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:17.404432    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:17.480937    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:17.904061    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:17.980969    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:18.404491    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:18.481031    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:18.904020    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:18.981054    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:19.405323    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:19.480019    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:19.904276    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:19.980839    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:20.404204    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:20.481250    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:20.904037    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:20.981407    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:21.404239    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:21.481248    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:21.904261    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:21.981109    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:22.405094    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:22.481049    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:22.904407    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:22.981227    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:23.404066    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:23.480779    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:23.904000    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:23.980955    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:24.404182    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:24.480903    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:24.904034    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:24.980896    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:25.403993    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:25.480949    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:25.903717    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:25.981591    1522 kapi.go:107] duration metric: took 11.512675166s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0914 14:37:25.985811    1522 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-388000 cluster.
	I0914 14:37:25.990747    1522 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0914 14:37:25.993661    1522 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0914 14:37:26.404089    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:26.904132    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:27.405664    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:27.903941    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:28.403884    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:28.903901    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:29.404487    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:29.903852    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:30.404685    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:30.903890    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:31.403753    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:31.903926    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:32.404318    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:32.903835    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:33.403834    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:33.903687    1522 kapi.go:107] duration metric: took 25.011601375s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0914 14:43:07.370409    1522 kapi.go:107] duration metric: took 6m0.008648916s to wait for app.kubernetes.io/name=ingress-nginx ...
	W0914 14:43:07.370479    1522 out.go:239] ! Enabling 'ingress' returned an error: running callbacks: [waiting for app.kubernetes.io/name=ingress-nginx pods: context deadline exceeded]
	I0914 14:43:07.418192    1522 kapi.go:107] duration metric: took 6m0.013534334s to wait for kubernetes.io/minikube-addons=registry ...
	W0914 14:43:07.418227    1522 out.go:239] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
	I0914 14:43:07.425587    1522 out.go:177] * Enabled addons: inspektor-gadget, volumesnapshots, cloud-spanner, storage-provisioner, ingress-dns, default-storageclass, metrics-server, gcp-auth, csi-hostpath-driver
	I0914 14:43:07.433636    1522 addons.go:502] enable addons completed in 6m0.084906709s: enabled=[inspektor-gadget volumesnapshots cloud-spanner storage-provisioner ingress-dns default-storageclass metrics-server gcp-auth csi-hostpath-driver]
	I0914 14:43:07.433650    1522 start.go:233] waiting for cluster config update ...
	I0914 14:43:07.433664    1522 start.go:242] writing updated cluster config ...
	I0914 14:43:07.433996    1522 ssh_runner.go:195] Run: rm -f paused
	I0914 14:43:07.464084    1522 start.go:600] kubectl: 1.27.2, cluster: 1.28.1 (minor skew: 1)
	I0914 14:43:07.467672    1522 out.go:177] * Done! kubectl is now configured to use "addons-388000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Thu 2023-09-14 21:36:36 UTC, ends at Thu 2023-09-14 21:55:07 UTC. --
	Sep 14 21:37:25 addons-388000 dockerd[1162]: time="2023-09-14T21:37:25.328348115Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 14 21:37:25 addons-388000 dockerd[1162]: time="2023-09-14T21:37:25.328354156Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 21:37:27 addons-388000 cri-dockerd[1056]: time="2023-09-14T21:37:27Z" level=info msg="Stop pulling image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5: Status: Downloaded newer image for registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5"
	Sep 14 21:37:27 addons-388000 dockerd[1162]: time="2023-09-14T21:37:27.377915116Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 14 21:37:27 addons-388000 dockerd[1162]: time="2023-09-14T21:37:27.377949991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 21:37:27 addons-388000 dockerd[1162]: time="2023-09-14T21:37:27.377964407Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 14 21:37:27 addons-388000 dockerd[1162]: time="2023-09-14T21:37:27.378048407Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 21:37:27 addons-388000 dockerd[1156]: time="2023-09-14T21:37:27.472826657Z" level=warning msg="reference for unknown type: " digest="sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0" remote="registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0"
	Sep 14 21:37:28 addons-388000 cri-dockerd[1056]: time="2023-09-14T21:37:28Z" level=info msg="Stop pulling image registry.k8s.io/sig-storage/livenessprobe:v2.8.0@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0: Status: Downloaded newer image for registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0"
	Sep 14 21:37:28 addons-388000 dockerd[1162]: time="2023-09-14T21:37:28.601133366Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 14 21:37:28 addons-388000 dockerd[1162]: time="2023-09-14T21:37:28.601186991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 21:37:28 addons-388000 dockerd[1162]: time="2023-09-14T21:37:28.601201491Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 14 21:37:28 addons-388000 dockerd[1162]: time="2023-09-14T21:37:28.601212200Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 21:37:28 addons-388000 dockerd[1156]: time="2023-09-14T21:37:28.692071408Z" level=warning msg="reference for unknown type: " digest="sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8" remote="registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8"
	Sep 14 21:37:31 addons-388000 cri-dockerd[1056]: time="2023-09-14T21:37:31Z" level=info msg="Stop pulling image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8: Status: Downloaded newer image for registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8"
	Sep 14 21:37:31 addons-388000 dockerd[1162]: time="2023-09-14T21:37:31.232372201Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 14 21:37:31 addons-388000 dockerd[1162]: time="2023-09-14T21:37:31.232402326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 21:37:31 addons-388000 dockerd[1162]: time="2023-09-14T21:37:31.232412909Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 14 21:37:31 addons-388000 dockerd[1162]: time="2023-09-14T21:37:31.232417493Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 21:37:31 addons-388000 dockerd[1156]: time="2023-09-14T21:37:31.325578326Z" level=warning msg="reference for unknown type: " digest="sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f" remote="registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f"
	Sep 14 21:37:33 addons-388000 cri-dockerd[1056]: time="2023-09-14T21:37:33Z" level=info msg="Stop pulling image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f: Status: Downloaded newer image for registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f"
	Sep 14 21:37:33 addons-388000 dockerd[1162]: time="2023-09-14T21:37:33.503964160Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 14 21:37:33 addons-388000 dockerd[1162]: time="2023-09-14T21:37:33.503991702Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 21:37:33 addons-388000 dockerd[1162]: time="2023-09-14T21:37:33.504000744Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 14 21:37:33 addons-388000 dockerd[1162]: time="2023-09-14T21:37:33.504006994Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID
	c6e7158ec87e6       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          17 minutes ago      Running             csi-snapshotter                          0                   23a9864c5e7a2
	8fbd96f503108       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          17 minutes ago      Running             csi-provisioner                          0                   23a9864c5e7a2
	5a28f3666ec4d       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            17 minutes ago      Running             liveness-probe                           0                   23a9864c5e7a2
	4a515f3dbd90e       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           17 minutes ago      Running             hostpath                                 0                   23a9864c5e7a2
	726bdbe627b06       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf                                 17 minutes ago      Running             gcp-auth                                 0                   039c490b8ce95
	c5e816aa3fb60       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                17 minutes ago      Running             node-driver-registrar                    0                   23a9864c5e7a2
	0574ef72c784a       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              17 minutes ago      Running             csi-resizer                              0                   928188ebbbe5c
	0af4f9c858980       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   17 minutes ago      Running             csi-external-health-monitor-controller   0                   23a9864c5e7a2
	9a3fe3bf72dd7       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             17 minutes ago      Running             csi-attacher                             0                   aec96cfd028be
	e99b5961a5b90       registry.k8s.io/metrics-server/metrics-server@sha256:ee4304963fb035239bb5c5e8c10f2f38ee80efc16ecbdb9feb7213c17ae2e86e                        17 minutes ago      Running             metrics-server                           0                   40cdfd9e591d6
	1f519e69776da       97e04611ad434                                                                                                                                17 minutes ago      Running             coredns                                  0                   6b82b02e01da4
	c36ca5fc76214       812f5241df7fd                                                                                                                                17 minutes ago      Running             kube-proxy                               0                   24118a5be8efa
	af45960dc2d7c       b4a5a57e99492                                                                                                                                18 minutes ago      Running             kube-scheduler                           0                   6dde63050aa99
	39f78945ed576       b29fb62480892                                                                                                                                18 minutes ago      Running             kube-apiserver                           0                   a02ab403a50ec
	f2717f532e595       8b6e1980b7584                                                                                                                                18 minutes ago      Running             kube-controller-manager                  0                   834af4f99b3bc
	5a63d0e8296f4       9cdd6470f48c8                                                                                                                                18 minutes ago      Running             etcd                                     0                   b2289ff5c077b
	
	* 
	* ==> coredns [1f519e69776d] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               addons-388000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-388000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7492f2ae2d9b6e62b385ffcd97ebad62c645e82
	                    minikube.k8s.io/name=addons-388000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_14T14_36_54_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-388000
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-388000"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Sep 2023 21:36:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-388000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Sep 2023 21:55:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Sep 2023 21:53:15 +0000   Thu, 14 Sep 2023 21:36:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Sep 2023 21:53:15 +0000   Thu, 14 Sep 2023 21:36:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Sep 2023 21:53:15 +0000   Thu, 14 Sep 2023 21:36:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Sep 2023 21:53:15 +0000   Thu, 14 Sep 2023 21:36:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-388000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 ca8cf67d46214b1fbc59c14cf3d2d66f
	  System UUID:                ca8cf67d46214b1fbc59c14cf3d2d66f
	  Boot ID:                    386c1075-3226-461a-ab43-e16ad465a6c4
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  gcp-auth                    gcp-auth-d4c87556c-pjjjl                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 coredns-5dd5756b68-psn28                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     17m
	  kube-system                 csi-hostpath-attacher-0                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 csi-hostpath-resizer-0                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 csi-hostpathplugin-b5k2m                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 etcd-addons-388000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         18m
	  kube-system                 kube-apiserver-addons-388000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-addons-388000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-8pbsf                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-addons-388000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 metrics-server-7c66d45ddc-dtj78          100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 17m   kube-proxy       
	  Normal  Starting                 18m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  18m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  18m   kubelet          Node addons-388000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m   kubelet          Node addons-388000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m   kubelet          Node addons-388000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                18m   kubelet          Node addons-388000 status is now: NodeReady
	  Normal  RegisteredNode           18m   node-controller  Node addons-388000 event: Registered Node addons-388000 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000000] KASLR disabled due to lack of seed
	[  +0.645091] EINJ: EINJ table not found.
	[  +0.506039] systemd-fstab-generator[117]: Ignoring "noauto" for root device
	[  +0.043466] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000824] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +4.211816] systemd-fstab-generator[481]: Ignoring "noauto" for root device
	[  +0.087452] systemd-fstab-generator[492]: Ignoring "noauto" for root device
	[  +0.529791] systemd-fstab-generator[754]: Ignoring "noauto" for root device
	[  +0.178247] systemd-fstab-generator[791]: Ignoring "noauto" for root device
	[  +0.078699] systemd-fstab-generator[802]: Ignoring "noauto" for root device
	[  +0.082696] systemd-fstab-generator[815]: Ignoring "noauto" for root device
	[  +1.243164] systemd-fstab-generator[973]: Ignoring "noauto" for root device
	[  +0.079535] systemd-fstab-generator[984]: Ignoring "noauto" for root device
	[  +0.081510] systemd-fstab-generator[995]: Ignoring "noauto" for root device
	[  +0.082103] systemd-fstab-generator[1006]: Ignoring "noauto" for root device
	[  +0.084560] systemd-fstab-generator[1049]: Ignoring "noauto" for root device
	[  +2.579762] systemd-fstab-generator[1149]: Ignoring "noauto" for root device
	[  +2.146558] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.177617] systemd-fstab-generator[1466]: Ignoring "noauto" for root device
	[  +5.135787] systemd-fstab-generator[2333]: Ignoring "noauto" for root device
	[Sep14 21:37] kauditd_printk_skb: 41 callbacks suppressed
	[  +1.224924] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[  +5.069347] kauditd_printk_skb: 47 callbacks suppressed
	[  +8.062309] kauditd_printk_skb: 6 callbacks suppressed
	[  +6.104989] kauditd_printk_skb: 5 callbacks suppressed
	
	* 
	* ==> etcd [5a63d0e8296f] <==
	* {"level":"info","ts":"2023-09-14T21:36:50.716133Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-14T21:36:51.510944Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 is starting a new election at term 1"}
	{"level":"info","ts":"2023-09-14T21:36:51.511016Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-09-14T21:36:51.51104Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 received MsgPreVoteResp from c46d288d2fcb0590 at term 1"}
	{"level":"info","ts":"2023-09-14T21:36:51.511075Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became candidate at term 2"}
	{"level":"info","ts":"2023-09-14T21:36:51.511094Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 received MsgVoteResp from c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2023-09-14T21:36:51.511114Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became leader at term 2"}
	{"level":"info","ts":"2023-09-14T21:36:51.511137Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c46d288d2fcb0590 elected leader c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2023-09-14T21:36:51.511827Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"c46d288d2fcb0590","local-member-attributes":"{Name:addons-388000 ClientURLs:[https://192.168.105.2:2379]}","request-path":"/0/members/c46d288d2fcb0590/attributes","cluster-id":"6e03e7863b4f9c54","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-14T21:36:51.511953Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-14T21:36:51.512015Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-14T21:36:51.512221Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-14T21:36:51.512251Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-14T21:36:51.512273Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-14T21:36:51.512322Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-14T21:36:51.512345Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-14T21:36:51.512368Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-14T21:36:51.512743Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.2:2379"}
	{"level":"info","ts":"2023-09-14T21:36:51.513582Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-14T21:46:51.09248Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":801}
	{"level":"info","ts":"2023-09-14T21:46:51.094243Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":801,"took":"1.342212ms","hash":1083412012}
	{"level":"info","ts":"2023-09-14T21:46:51.094259Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1083412012,"revision":801,"compact-revision":-1}
	{"level":"info","ts":"2023-09-14T21:51:51.097381Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":951}
	{"level":"info","ts":"2023-09-14T21:51:51.097919Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":951,"took":"269.454µs","hash":1387439011}
	{"level":"info","ts":"2023-09-14T21:51:51.097932Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1387439011,"revision":951,"compact-revision":801}
	
	* 
	* ==> gcp-auth [726bdbe627b0] <==
	* 2023/09/14 21:37:25 GCP Auth Webhook started!
	
	* 
	* ==> kernel <==
	*  21:55:07 up 18 min,  0 users,  load average: 0.10, 0.19, 0.17
	Linux addons-388000 5.10.57 #1 SMP PREEMPT Wed Sep 13 19:05:16 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [39f78945ed57] <==
	* I0914 21:37:18.721175       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0914 21:37:18.723679       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
	I0914 21:37:18.728157       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0914 21:37:51.694747       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0914 21:38:51.695985       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0914 21:39:51.695955       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0914 21:40:51.695987       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0914 21:41:51.695403       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0914 21:41:51.759878       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0914 21:42:51.695830       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0914 21:43:51.695722       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0914 21:44:51.695063       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0914 21:45:51.695412       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0914 21:46:51.695845       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0914 21:46:51.765612       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0914 21:47:51.695437       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0914 21:48:51.695685       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0914 21:49:51.695503       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0914 21:50:51.694805       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0914 21:51:51.695866       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0914 21:51:51.770993       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0914 21:52:51.695289       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0914 21:53:00.844205       1 watcher.go:245] watch chan error: etcdserver: mvcc: required revision has been compacted
	I0914 21:53:51.695037       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0914 21:54:51.695833       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [f2717f532e59] <==
	* I0914 21:37:14.521583       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0914 21:37:18.703329       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-7c66d45ddc" duration="5.130125ms"
	I0914 21:37:18.703353       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-7c66d45ddc" duration="13.667µs"
	I0914 21:37:21.717272       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0914 21:37:21.725039       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0914 21:37:22.734779       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0914 21:37:22.813793       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0914 21:37:23.747243       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0914 21:37:23.753327       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0914 21:37:23.816708       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0914 21:37:23.819180       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0914 21:37:23.821789       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0914 21:37:23.822117       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-create" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0914 21:37:23.822779       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0914 21:37:24.756088       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0914 21:37:24.759099       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0914 21:37:24.761716       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0914 21:37:24.762180       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0914 21:37:24.762196       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0914 21:37:25.770929       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-d4c87556c" duration="2.310416ms"
	I0914 21:37:25.771746       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-d4c87556c" duration="11.791µs"
	I0914 21:37:53.005858       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0914 21:37:53.014644       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0914 21:37:54.003849       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0914 21:37:54.024141       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	
	* 
	* ==> kube-proxy [c36ca5fc7621] <==
	* I0914 21:37:08.522854       1 server_others.go:69] "Using iptables proxy"
	I0914 21:37:08.529066       1 node.go:141] Successfully retrieved node IP: 192.168.105.2
	I0914 21:37:08.587870       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0914 21:37:08.587883       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0914 21:37:08.588459       1 server_others.go:152] "Using iptables Proxier"
	I0914 21:37:08.588486       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0914 21:37:08.588572       1 server.go:846] "Version info" version="v1.28.1"
	I0914 21:37:08.588578       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 21:37:08.589296       1 config.go:188] "Starting service config controller"
	I0914 21:37:08.589305       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0914 21:37:08.589315       1 config.go:97] "Starting endpoint slice config controller"
	I0914 21:37:08.589317       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0914 21:37:08.589522       1 config.go:315] "Starting node config controller"
	I0914 21:37:08.589524       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0914 21:37:08.690794       1 shared_informer.go:318] Caches are synced for node config
	I0914 21:37:08.690821       1 shared_informer.go:318] Caches are synced for service config
	I0914 21:37:08.690838       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [af45960dc2d7] <==
	* E0914 21:36:52.199210       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0914 21:36:52.199206       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0914 21:36:52.199236       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0914 21:36:52.199258       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0914 21:36:52.199265       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0914 21:36:52.199278       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0914 21:36:52.199281       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0914 21:36:52.199189       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0914 21:36:52.199288       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0914 21:36:52.199260       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0914 21:36:52.199247       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0914 21:36:52.199323       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0914 21:36:53.095318       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0914 21:36:53.095337       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0914 21:36:53.142146       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0914 21:36:53.142164       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0914 21:36:53.158912       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0914 21:36:53.159021       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0914 21:36:53.162940       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0914 21:36:53.163031       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0914 21:36:53.206403       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0914 21:36:53.206481       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0914 21:36:53.209535       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0914 21:36:53.209549       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0914 21:36:53.797539       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-09-14 21:36:36 UTC, ends at Thu 2023-09-14 21:55:08 UTC. --
	Sep 14 21:49:54 addons-388000 kubelet[2339]: E0914 21:49:54.524223    2339 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 14 21:49:54 addons-388000 kubelet[2339]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 14 21:49:54 addons-388000 kubelet[2339]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 21:49:54 addons-388000 kubelet[2339]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 14 21:50:54 addons-388000 kubelet[2339]: E0914 21:50:54.525020    2339 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 14 21:50:54 addons-388000 kubelet[2339]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 14 21:50:54 addons-388000 kubelet[2339]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 21:50:54 addons-388000 kubelet[2339]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 14 21:51:54 addons-388000 kubelet[2339]: E0914 21:51:54.525199    2339 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 14 21:51:54 addons-388000 kubelet[2339]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 14 21:51:54 addons-388000 kubelet[2339]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 21:51:54 addons-388000 kubelet[2339]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 14 21:51:54 addons-388000 kubelet[2339]: W0914 21:51:54.547907    2339 machine.go:65] Cannot read vendor id correctly, set empty.
	Sep 14 21:52:54 addons-388000 kubelet[2339]: E0914 21:52:54.525011    2339 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 14 21:52:54 addons-388000 kubelet[2339]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 14 21:52:54 addons-388000 kubelet[2339]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 21:52:54 addons-388000 kubelet[2339]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 14 21:53:54 addons-388000 kubelet[2339]: E0914 21:53:54.524934    2339 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 14 21:53:54 addons-388000 kubelet[2339]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 14 21:53:54 addons-388000 kubelet[2339]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 21:53:54 addons-388000 kubelet[2339]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 14 21:54:54 addons-388000 kubelet[2339]: E0914 21:54:54.528203    2339 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 14 21:54:54 addons-388000 kubelet[2339]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 14 21:54:54 addons-388000 kubelet[2339]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 21:54:54 addons-388000 kubelet[2339]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-388000 -n addons-388000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-388000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (720.78s)

                                                
                                    
TestAddons/parallel/Ingress (0.76s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:183: (dbg) Run:  kubectl --context addons-388000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Non-zero exit: kubectl --context addons-388000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: exit status 1 (35.328ms)

                                                
                                                
** stderr ** 
	error: no matching resources found

                                                
                                                
** /stderr **
addons_test.go:184: failed waiting for ingress-nginx-controller : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-388000 -n addons-388000
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-388000 logs -n 25
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-917000 | jenkins | v1.31.2 | 14 Sep 23 14:35 PDT |                     |
	|         | -p download-only-917000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-917000 | jenkins | v1.31.2 | 14 Sep 23 14:35 PDT |                     |
	|         | -p download-only-917000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.31.2 | 14 Sep 23 14:36 PDT | 14 Sep 23 14:36 PDT |
	| delete  | -p download-only-917000        | download-only-917000 | jenkins | v1.31.2 | 14 Sep 23 14:36 PDT | 14 Sep 23 14:36 PDT |
	| delete  | -p download-only-917000        | download-only-917000 | jenkins | v1.31.2 | 14 Sep 23 14:36 PDT | 14 Sep 23 14:36 PDT |
	| start   | --download-only -p             | binary-mirror-231000 | jenkins | v1.31.2 | 14 Sep 23 14:36 PDT |                     |
	|         | binary-mirror-231000           |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --binary-mirror                |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49379         |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-231000        | binary-mirror-231000 | jenkins | v1.31.2 | 14 Sep 23 14:36 PDT | 14 Sep 23 14:36 PDT |
	| start   | -p addons-388000               | addons-388000        | jenkins | v1.31.2 | 14 Sep 23 14:36 PDT | 14 Sep 23 14:43 PDT |
	|         | --wait=true --memory=4000      |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --addons=registry              |                      |         |         |                     |                     |
	|         | --addons=metrics-server        |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                      |         |         |                     |                     |
	|         | --addons=gcp-auth              |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|         | --addons=ingress               |                      |         |         |                     |                     |
	|         | --addons=ingress-dns           |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-388000        | jenkins | v1.31.2 | 14 Sep 23 14:55 PDT |                     |
	|         | addons-388000                  |                      |         |         |                     |                     |
	| addons  | addons-388000 addons           | addons-388000        | jenkins | v1.31.2 | 14 Sep 23 14:55 PDT | 14 Sep 23 14:55 PDT |
	|         | disable metrics-server         |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/14 14:36:23
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.21.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 14:36:23.572515    1522 out.go:296] Setting OutFile to fd 1 ...
	I0914 14:36:23.572636    1522 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 14:36:23.572639    1522 out.go:309] Setting ErrFile to fd 2...
	I0914 14:36:23.572642    1522 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 14:36:23.572752    1522 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17243-1006/.minikube/bin
	I0914 14:36:23.573756    1522 out.go:303] Setting JSON to false
	I0914 14:36:23.588610    1522 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":357,"bootTime":1694727026,"procs":391,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 14:36:23.588683    1522 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0914 14:36:23.593630    1522 out.go:177] * [addons-388000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0914 14:36:23.600459    1522 out.go:177]   - MINIKUBE_LOCATION=17243
	I0914 14:36:23.600497    1522 notify.go:220] Checking for updates...
	I0914 14:36:23.603591    1522 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	I0914 14:36:23.606425    1522 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 14:36:23.609496    1522 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 14:36:23.612541    1522 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	I0914 14:36:23.615423    1522 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 14:36:23.618648    1522 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 14:36:23.622479    1522 out.go:177] * Using the qemu2 driver based on user configuration
	I0914 14:36:23.629482    1522 start.go:298] selected driver: qemu2
	I0914 14:36:23.629487    1522 start.go:902] validating driver "qemu2" against <nil>
	I0914 14:36:23.629493    1522 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 14:36:23.631382    1522 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0914 14:36:23.634542    1522 out.go:177] * Automatically selected the socket_vmnet network
	I0914 14:36:23.637548    1522 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 14:36:23.637570    1522 cni.go:84] Creating CNI manager for ""
	I0914 14:36:23.637578    1522 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 14:36:23.637583    1522 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0914 14:36:23.637590    1522 start_flags.go:321] config:
	{Name:addons-388000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:addons-388000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID
:0 AutoPauseInterval:1m0s}
	I0914 14:36:23.641729    1522 iso.go:125] acquiring lock: {Name:mkd5b68b252c18146a825ff5365883948b3c6983 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 14:36:23.649492    1522 out.go:177] * Starting control plane node addons-388000 in cluster addons-388000
	I0914 14:36:23.653459    1522 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0914 14:36:23.653478    1522 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0914 14:36:23.653492    1522 cache.go:57] Caching tarball of preloaded images
	I0914 14:36:23.653557    1522 preload.go:174] Found /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 14:36:23.653564    1522 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0914 14:36:23.653811    1522 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/config.json ...
	I0914 14:36:23.653825    1522 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/config.json: {Name:mk9010c5dfb0ad4a966bb29118112217ba3b6cf7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 14:36:23.654041    1522 start.go:365] acquiring machines lock for addons-388000: {Name:mk831f2bd5a9a0c932c8a59972319d68edb53d07 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 14:36:23.654147    1522 start.go:369] acquired machines lock for "addons-388000" in 99.875µs
	I0914 14:36:23.654159    1522 start.go:93] Provisioning new machine with config: &{Name:addons-388000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.28.1 ClusterName:addons-388000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262
144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 14:36:23.654194    1522 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 14:36:23.662516    1522 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0914 14:36:23.982709    1522 start.go:159] libmachine.API.Create for "addons-388000" (driver="qemu2")
	I0914 14:36:23.982756    1522 client.go:168] LocalClient.Create starting
	I0914 14:36:23.982899    1522 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem
	I0914 14:36:24.329911    1522 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/cert.pem
	I0914 14:36:24.425142    1522 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17243-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1694625400-17243-arm64.iso...
	I0914 14:36:24.784281    1522 main.go:141] libmachine: Creating SSH key...
	I0914 14:36:25.013863    1522 main.go:141] libmachine: Creating Disk image...
	I0914 14:36:25.013874    1522 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 14:36:25.014143    1522 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/disk.qcow2.raw /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/disk.qcow2
	I0914 14:36:25.048599    1522 main.go:141] libmachine: STDOUT: 
	I0914 14:36:25.048634    1522 main.go:141] libmachine: STDERR: 
	I0914 14:36:25.048701    1522 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/disk.qcow2 +20000M
	I0914 14:36:25.056105    1522 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 14:36:25.056122    1522 main.go:141] libmachine: STDERR: 
	I0914 14:36:25.056141    1522 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/disk.qcow2
	I0914 14:36:25.056150    1522 main.go:141] libmachine: Starting QEMU VM...
	I0914 14:36:25.056194    1522 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:ab:b1:c2:6f:25 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/disk.qcow2
	I0914 14:36:25.122275    1522 main.go:141] libmachine: STDOUT: 
	I0914 14:36:25.122322    1522 main.go:141] libmachine: STDERR: 
	I0914 14:36:25.122327    1522 main.go:141] libmachine: Attempt 0
	I0914 14:36:25.122346    1522 main.go:141] libmachine: Searching for fa:ab:b1:c2:6f:25 in /var/db/dhcpd_leases ...
	I0914 14:36:27.123500    1522 main.go:141] libmachine: Attempt 1
	I0914 14:36:27.123581    1522 main.go:141] libmachine: Searching for fa:ab:b1:c2:6f:25 in /var/db/dhcpd_leases ...
	I0914 14:36:29.124764    1522 main.go:141] libmachine: Attempt 2
	I0914 14:36:29.124788    1522 main.go:141] libmachine: Searching for fa:ab:b1:c2:6f:25 in /var/db/dhcpd_leases ...
	I0914 14:36:31.125900    1522 main.go:141] libmachine: Attempt 3
	I0914 14:36:31.125919    1522 main.go:141] libmachine: Searching for fa:ab:b1:c2:6f:25 in /var/db/dhcpd_leases ...
	I0914 14:36:33.126934    1522 main.go:141] libmachine: Attempt 4
	I0914 14:36:33.126945    1522 main.go:141] libmachine: Searching for fa:ab:b1:c2:6f:25 in /var/db/dhcpd_leases ...
	I0914 14:36:35.127988    1522 main.go:141] libmachine: Attempt 5
	I0914 14:36:35.128006    1522 main.go:141] libmachine: Searching for fa:ab:b1:c2:6f:25 in /var/db/dhcpd_leases ...
	I0914 14:36:37.130061    1522 main.go:141] libmachine: Attempt 6
	I0914 14:36:37.130089    1522 main.go:141] libmachine: Searching for fa:ab:b1:c2:6f:25 in /var/db/dhcpd_leases ...
	I0914 14:36:37.130226    1522 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0914 14:36:37.130272    1522 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:fa:ab:b1:c2:6f:25 ID:1,fa:ab:b1:c2:6f:25 Lease:0x6504ce64}
	I0914 14:36:37.130284    1522 main.go:141] libmachine: Found match: fa:ab:b1:c2:6f:25
	I0914 14:36:37.130296    1522 main.go:141] libmachine: IP: 192.168.105.2
	I0914 14:36:37.130304    1522 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
	I0914 14:36:39.152264    1522 machine.go:88] provisioning docker machine ...
	I0914 14:36:39.152328    1522 buildroot.go:166] provisioning hostname "addons-388000"
	I0914 14:36:39.153898    1522 main.go:141] libmachine: Using SSH client type: native
	I0914 14:36:39.154765    1522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102894760] 0x102896ed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0914 14:36:39.154789    1522 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-388000 && echo "addons-388000" | sudo tee /etc/hostname
	I0914 14:36:39.254406    1522 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-388000
	
	I0914 14:36:39.254547    1522 main.go:141] libmachine: Using SSH client type: native
	I0914 14:36:39.254974    1522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102894760] 0x102896ed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0914 14:36:39.254987    1522 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-388000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-388000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-388000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 14:36:39.336783    1522 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 14:36:39.336807    1522 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17243-1006/.minikube CaCertPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17243-1006/.minikube}
	I0914 14:36:39.336834    1522 buildroot.go:174] setting up certificates
	I0914 14:36:39.336842    1522 provision.go:83] configureAuth start
	I0914 14:36:39.336850    1522 provision.go:138] copyHostCerts
	I0914 14:36:39.337062    1522 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17243-1006/.minikube/ca.pem (1082 bytes)
	I0914 14:36:39.337458    1522 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17243-1006/.minikube/cert.pem (1123 bytes)
	I0914 14:36:39.337624    1522 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17243-1006/.minikube/key.pem (1675 bytes)
	I0914 14:36:39.337823    1522 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca-key.pem org=jenkins.addons-388000 san=[192.168.105.2 192.168.105.2 localhost 127.0.0.1 minikube addons-388000]
	I0914 14:36:39.438902    1522 provision.go:172] copyRemoteCerts
	I0914 14:36:39.438967    1522 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 14:36:39.438977    1522 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/id_rsa Username:docker}
	I0914 14:36:39.475382    1522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0914 14:36:39.482935    1522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0914 14:36:39.490611    1522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0914 14:36:39.498058    1522 provision.go:86] duration metric: configureAuth took 161.21375ms
	I0914 14:36:39.498072    1522 buildroot.go:189] setting minikube options for container-runtime
	I0914 14:36:39.498194    1522 config.go:182] Loaded profile config "addons-388000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 14:36:39.498238    1522 main.go:141] libmachine: Using SSH client type: native
	I0914 14:36:39.498454    1522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102894760] 0x102896ed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0914 14:36:39.498461    1522 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0914 14:36:39.568371    1522 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0914 14:36:39.568380    1522 buildroot.go:70] root file system type: tmpfs
	I0914 14:36:39.568444    1522 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0914 14:36:39.568493    1522 main.go:141] libmachine: Using SSH client type: native
	I0914 14:36:39.568758    1522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102894760] 0x102896ed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0914 14:36:39.568795    1522 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0914 14:36:39.642658    1522 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0914 14:36:39.642714    1522 main.go:141] libmachine: Using SSH client type: native
	I0914 14:36:39.642984    1522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102894760] 0x102896ed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0914 14:36:39.642994    1522 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0914 14:36:40.018079    1522 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0914 14:36:40.018095    1522 machine.go:91] provisioned docker machine in 865.825208ms
	I0914 14:36:40.018101    1522 client.go:171] LocalClient.Create took 16.035747292s
	I0914 14:36:40.018112    1522 start.go:167] duration metric: libmachine.API.Create for "addons-388000" took 16.035815708s
	I0914 14:36:40.018117    1522 start.go:300] post-start starting for "addons-388000" (driver="qemu2")
	I0914 14:36:40.018121    1522 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 14:36:40.018186    1522 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 14:36:40.018197    1522 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/id_rsa Username:docker}
	I0914 14:36:40.056512    1522 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 14:36:40.057796    1522 info.go:137] Remote host: Buildroot 2021.02.12
	I0914 14:36:40.057807    1522 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17243-1006/.minikube/addons for local assets ...
	I0914 14:36:40.057875    1522 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17243-1006/.minikube/files for local assets ...
	I0914 14:36:40.057901    1522 start.go:303] post-start completed in 39.782666ms
	I0914 14:36:40.058218    1522 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/config.json ...
	I0914 14:36:40.058366    1522 start.go:128] duration metric: createHost completed in 16.404584042s
	I0914 14:36:40.058389    1522 main.go:141] libmachine: Using SSH client type: native
	I0914 14:36:40.058608    1522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102894760] 0x102896ed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0914 14:36:40.058612    1522 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 14:36:40.126242    1522 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694727400.596628044
	
	I0914 14:36:40.126252    1522 fix.go:206] guest clock: 1694727400.596628044
	I0914 14:36:40.126256    1522 fix.go:219] Guest: 2023-09-14 14:36:40.596628044 -0700 PDT Remote: 2023-09-14 14:36:40.058369 -0700 PDT m=+16.505601626 (delta=538.259044ms)
	I0914 14:36:40.126267    1522 fix.go:190] guest clock delta is within tolerance: 538.259044ms
	I0914 14:36:40.126272    1522 start.go:83] releasing machines lock for "addons-388000", held for 16.472537s
	I0914 14:36:40.126627    1522 ssh_runner.go:195] Run: cat /version.json
	I0914 14:36:40.126630    1522 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 14:36:40.126636    1522 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/id_rsa Username:docker}
	I0914 14:36:40.126680    1522 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/id_rsa Username:docker}
	I0914 14:36:40.164117    1522 ssh_runner.go:195] Run: systemctl --version
	I0914 14:36:40.279852    1522 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 14:36:40.282756    1522 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 14:36:40.282802    1522 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 14:36:40.290141    1522 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 14:36:40.290164    1522 start.go:469] detecting cgroup driver to use...
	I0914 14:36:40.290325    1522 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 14:36:40.298242    1522 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0914 14:36:40.302485    1522 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0914 14:36:40.306314    1522 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0914 14:36:40.306335    1522 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0914 14:36:40.309906    1522 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0914 14:36:40.313708    1522 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0914 14:36:40.317003    1522 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0914 14:36:40.319988    1522 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 14:36:40.323114    1522 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0914 14:36:40.326593    1522 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 14:36:40.329687    1522 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 14:36:40.332474    1522 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 14:36:40.414020    1522 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0914 14:36:40.421074    1522 start.go:469] detecting cgroup driver to use...
	I0914 14:36:40.421134    1522 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0914 14:36:40.426647    1522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 14:36:40.431508    1522 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 14:36:40.437031    1522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 14:36:40.441206    1522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0914 14:36:40.445778    1522 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0914 14:36:40.494559    1522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0914 14:36:40.500245    1522 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 14:36:40.506085    1522 ssh_runner.go:195] Run: which cri-dockerd
	I0914 14:36:40.507323    1522 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0914 14:36:40.510306    1522 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0914 14:36:40.515235    1522 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0914 14:36:40.590641    1522 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0914 14:36:40.670685    1522 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0914 14:36:40.670697    1522 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0914 14:36:40.676022    1522 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 14:36:40.753642    1522 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0914 14:36:41.915654    1522 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.162025209s)
	I0914 14:36:41.915719    1522 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0914 14:36:41.996165    1522 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0914 14:36:42.077673    1522 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0914 14:36:42.158787    1522 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 14:36:42.238393    1522 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0914 14:36:42.246223    1522 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 14:36:42.322653    1522 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0914 14:36:42.347035    1522 start.go:516] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0914 14:36:42.347147    1522 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0914 14:36:42.349276    1522 start.go:537] Will wait 60s for crictl version
	I0914 14:36:42.349310    1522 ssh_runner.go:195] Run: which crictl
	I0914 14:36:42.350645    1522 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 14:36:42.367912    1522 start.go:553] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1alpha2
	I0914 14:36:42.367994    1522 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0914 14:36:42.377957    1522 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0914 14:36:42.394599    1522 out.go:204] * Preparing Kubernetes v1.28.1 on Docker 24.0.6 ...
	I0914 14:36:42.394744    1522 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0914 14:36:42.396150    1522 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 14:36:42.399678    1522 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0914 14:36:42.399720    1522 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0914 14:36:42.404754    1522 docker.go:636] Got preloaded images: 
	I0914 14:36:42.404761    1522 docker.go:642] registry.k8s.io/kube-apiserver:v1.28.1 wasn't preloaded
	I0914 14:36:42.404801    1522 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0914 14:36:42.407644    1522 ssh_runner.go:195] Run: which lz4
	I0914 14:36:42.408926    1522 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0914 14:36:42.410207    1522 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 14:36:42.410221    1522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (356902558 bytes)
	I0914 14:36:43.758723    1522 docker.go:600] Took 1.349866 seconds to copy over tarball
	I0914 14:36:43.758788    1522 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 14:36:44.802481    1522 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.043706042s)
	I0914 14:36:44.802494    1522 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 14:36:44.818862    1522 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0914 14:36:44.822486    1522 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0914 14:36:44.827997    1522 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 14:36:44.904406    1522 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0914 14:36:47.070320    1522 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.165952375s)
	I0914 14:36:47.070426    1522 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0914 14:36:47.076673    1522 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.1
	registry.k8s.io/kube-scheduler:v1.28.1
	registry.k8s.io/kube-proxy:v1.28.1
	registry.k8s.io/kube-controller-manager:v1.28.1
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0914 14:36:47.076684    1522 cache_images.go:84] Images are preloaded, skipping loading
	I0914 14:36:47.076750    1522 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0914 14:36:47.084410    1522 cni.go:84] Creating CNI manager for ""
	I0914 14:36:47.084420    1522 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 14:36:47.084443    1522 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0914 14:36:47.084452    1522 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-388000 NodeName:addons-388000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/ku
bernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 14:36:47.084527    1522 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-388000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 14:36:47.084571    1522 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-388000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:addons-388000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0914 14:36:47.084633    1522 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0914 14:36:47.087471    1522 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 14:36:47.087501    1522 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 14:36:47.090481    1522 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0914 14:36:47.095702    1522 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 14:36:47.100584    1522 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I0914 14:36:47.105532    1522 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0914 14:36:47.106963    1522 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 14:36:47.110892    1522 certs.go:56] Setting up /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000 for IP: 192.168.105.2
	I0914 14:36:47.110903    1522 certs.go:190] acquiring lock for shared ca certs: {Name:mkd19d6e2143685b57ba1e0d43c4081bbdb26a79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 14:36:47.111053    1522 certs.go:204] generating minikubeCA CA: /Users/jenkins/minikube-integration/17243-1006/.minikube/ca.key
	I0914 14:36:47.228830    1522 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17243-1006/.minikube/ca.crt ...
	I0914 14:36:47.228840    1522 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/ca.crt: {Name:mk1c10f9290e336c983838c8c09bb8cd18a9a4c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 14:36:47.229095    1522 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17243-1006/.minikube/ca.key ...
	I0914 14:36:47.229099    1522 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/ca.key: {Name:mkbc669c78b9b93a07aa566669e7e92430fec9e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 14:36:47.229219    1522 certs.go:204] generating proxyClientCA CA: /Users/jenkins/minikube-integration/17243-1006/.minikube/proxy-client-ca.key
	I0914 14:36:47.333428    1522 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17243-1006/.minikube/proxy-client-ca.crt ...
	I0914 14:36:47.333432    1522 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/proxy-client-ca.crt: {Name:mk85d65dc023d08a0f4cb19cc395e69f12c9ed1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 14:36:47.333577    1522 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17243-1006/.minikube/proxy-client-ca.key ...
	I0914 14:36:47.333579    1522 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/proxy-client-ca.key: {Name:mk62bc08bafeee956e88b9480bac37c2df91bf30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 14:36:47.333721    1522 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/client.key
	I0914 14:36:47.333730    1522 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/client.crt with IP's: []
	I0914 14:36:47.598337    1522 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/client.crt ...
	I0914 14:36:47.598352    1522 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/client.crt: {Name:mk8ecd4e838807718c7ef97bafd599d3b7fd1a4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 14:36:47.598702    1522 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/client.key ...
	I0914 14:36:47.598710    1522 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/client.key: {Name:mk3960bc5fb536243466f07f9f23680cfa92d826 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 14:36:47.598826    1522 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/apiserver.key.96055969
	I0914 14:36:47.598838    1522 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/apiserver.crt.96055969 with IP's: [192.168.105.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0914 14:36:47.656638    1522 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/apiserver.crt.96055969 ...
	I0914 14:36:47.656642    1522 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/apiserver.crt.96055969: {Name:mk3691ba24392ca70b8d7adb6c837bd5b52dfeeb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 14:36:47.656789    1522 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/apiserver.key.96055969 ...
	I0914 14:36:47.656792    1522 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/apiserver.key.96055969: {Name:mk7619af569a08784491e3a0055c754ead430eca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 14:36:47.656913    1522 certs.go:337] copying /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/apiserver.crt.96055969 -> /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/apiserver.crt
	I0914 14:36:47.657047    1522 certs.go:341] copying /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/apiserver.key.96055969 -> /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/apiserver.key
	I0914 14:36:47.657134    1522 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/proxy-client.key
	I0914 14:36:47.657146    1522 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/proxy-client.crt with IP's: []
	I0914 14:36:47.715161    1522 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/proxy-client.crt ...
	I0914 14:36:47.715165    1522 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/proxy-client.crt: {Name:mk5c5221c842b768f8e9ba880dc08acd610bf8dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 14:36:47.715298    1522 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/proxy-client.key ...
	I0914 14:36:47.715301    1522 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/proxy-client.key: {Name:mk620ca3f197a51ffd017e6711b4bab26fb15d79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 14:36:47.715560    1522 certs.go:437] found cert: /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 14:36:47.715594    1522 certs.go:437] found cert: /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem (1082 bytes)
	I0914 14:36:47.715621    1522 certs.go:437] found cert: /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/cert.pem (1123 bytes)
	I0914 14:36:47.715645    1522 certs.go:437] found cert: /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/key.pem (1675 bytes)
	I0914 14:36:47.716027    1522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0914 14:36:47.723894    1522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 14:36:47.731037    1522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 14:36:47.738379    1522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 14:36:47.745927    1522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 14:36:47.752925    1522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 14:36:47.759542    1522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 14:36:47.766602    1522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0914 14:36:47.773763    1522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 14:36:47.780697    1522 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 14:36:47.786484    1522 ssh_runner.go:195] Run: openssl version
	I0914 14:36:47.788649    1522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 14:36:47.791615    1522 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 14:36:47.793075    1522 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 14 21:36 /usr/share/ca-certificates/minikubeCA.pem
	I0914 14:36:47.793092    1522 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 14:36:47.794978    1522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 14:36:47.798423    1522 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0914 14:36:47.799931    1522 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0914 14:36:47.799971    1522 kubeadm.go:404] StartCluster: {Name:addons-388000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.1 ClusterName:addons-388000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 14:36:47.800034    1522 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0914 14:36:47.805504    1522 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 14:36:47.808480    1522 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 14:36:47.811111    1522 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 14:36:47.814398    1522 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 14:36:47.814412    1522 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 14:36:47.835210    1522 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0914 14:36:47.835254    1522 kubeadm.go:322] [preflight] Running pre-flight checks
	I0914 14:36:47.889698    1522 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 14:36:47.889750    1522 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 14:36:47.889794    1522 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0914 14:36:47.952261    1522 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 14:36:47.962464    1522 out.go:204]   - Generating certificates and keys ...
	I0914 14:36:47.962497    1522 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0914 14:36:47.962525    1522 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0914 14:36:48.025951    1522 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0914 14:36:48.134925    1522 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0914 14:36:48.186988    1522 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0914 14:36:48.299178    1522 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0914 14:36:48.429498    1522 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0914 14:36:48.429557    1522 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-388000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0914 14:36:48.510620    1522 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0914 14:36:48.510686    1522 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-388000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0914 14:36:48.631510    1522 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0914 14:36:48.668002    1522 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0914 14:36:48.726941    1522 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0914 14:36:48.726969    1522 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 14:36:48.823035    1522 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 14:36:48.918005    1522 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 14:36:49.052610    1522 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 14:36:49.136045    1522 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 14:36:49.136292    1522 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 14:36:49.138218    1522 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 14:36:49.141449    1522 out.go:204]   - Booting up control plane ...
	I0914 14:36:49.141518    1522 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 14:36:49.141563    1522 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 14:36:49.141596    1522 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 14:36:49.146098    1522 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 14:36:49.146527    1522 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 14:36:49.146584    1522 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0914 14:36:49.235726    1522 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 14:36:53.234480    1522 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.002199 seconds
	I0914 14:36:53.234548    1522 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 14:36:53.240692    1522 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 14:36:53.748795    1522 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0914 14:36:53.748894    1522 kubeadm.go:322] [mark-control-plane] Marking the node addons-388000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0914 14:36:54.253997    1522 kubeadm.go:322] [bootstrap-token] Using token: v43sey.bixdamecwwaf1quf
	I0914 14:36:54.261418    1522 out.go:204]   - Configuring RBAC rules ...
	I0914 14:36:54.261475    1522 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 14:36:54.262616    1522 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0914 14:36:54.269041    1522 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 14:36:54.270041    1522 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0914 14:36:54.271028    1522 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 14:36:54.272209    1522 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 14:36:54.276273    1522 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0914 14:36:54.432396    1522 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0914 14:36:54.665469    1522 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0914 14:36:54.665894    1522 kubeadm.go:322] 
	I0914 14:36:54.665937    1522 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0914 14:36:54.665940    1522 kubeadm.go:322] 
	I0914 14:36:54.665992    1522 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0914 14:36:54.665996    1522 kubeadm.go:322] 
	I0914 14:36:54.666008    1522 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0914 14:36:54.666036    1522 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 14:36:54.666071    1522 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 14:36:54.666074    1522 kubeadm.go:322] 
	I0914 14:36:54.666099    1522 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0914 14:36:54.666101    1522 kubeadm.go:322] 
	I0914 14:36:54.666123    1522 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0914 14:36:54.666126    1522 kubeadm.go:322] 
	I0914 14:36:54.666148    1522 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0914 14:36:54.666182    1522 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 14:36:54.666217    1522 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 14:36:54.666220    1522 kubeadm.go:322] 
	I0914 14:36:54.666261    1522 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0914 14:36:54.666306    1522 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0914 14:36:54.666308    1522 kubeadm.go:322] 
	I0914 14:36:54.666396    1522 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token v43sey.bixdamecwwaf1quf \
	I0914 14:36:54.666457    1522 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:64a56fc8b6798e154cd69d815313de8f6cddb906dc6093ac20ba940ac7c1d871 \
	I0914 14:36:54.666472    1522 kubeadm.go:322] 	--control-plane 
	I0914 14:36:54.666475    1522 kubeadm.go:322] 
	I0914 14:36:54.666513    1522 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0914 14:36:54.666517    1522 kubeadm.go:322] 
	I0914 14:36:54.666553    1522 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token v43sey.bixdamecwwaf1quf \
	I0914 14:36:54.666621    1522 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:64a56fc8b6798e154cd69d815313de8f6cddb906dc6093ac20ba940ac7c1d871 
	I0914 14:36:54.666672    1522 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 14:36:54.666677    1522 cni.go:84] Creating CNI manager for ""
	I0914 14:36:54.666685    1522 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 14:36:54.674398    1522 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 14:36:54.677531    1522 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 14:36:54.681843    1522 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
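Editor's note: the bridge CNI step above only records that a 457-byte conflist was copied to /etc/cni/net.d/1-k8s.conflist; the payload itself is not captured in the log. The Go sketch below is purely illustrative of what writing such a file looks like; the JSON contents, plugin options, and subnet are assumptions, not the file minikube actually generated here.

```go
package main

import (
	"log"
	"os"
	"path/filepath"
)

// Assumed contents: a typical CNI bridge conflist. The real 457-byte file
// copied in the log above is not shown, so treat this payload as a placeholder.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	dir := "/etc/cni/net.d" // matches the `sudo mkdir -p /etc/cni/net.d` step above
	if err := os.MkdirAll(dir, 0o755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile(filepath.Join(dir, "1-k8s.conflist"), []byte(bridgeConflist), 0o644); err != nil {
		log.Fatal(err)
	}
}
```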
	I0914 14:36:54.686762    1522 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 14:36:54.686820    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:36:54.686837    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=d7492f2ae2d9b6e62b385ffcd97ebad62c645e82 minikube.k8s.io/name=addons-388000 minikube.k8s.io/updated_at=2023_09_14T14_36_54_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:36:54.745761    1522 ops.go:34] apiserver oom_adj: -16
	I0914 14:36:54.751811    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:36:54.783862    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:36:55.319135    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:36:55.819146    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:36:56.319044    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:36:56.817396    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:36:57.317676    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:36:57.819036    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:36:58.319007    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:36:58.819025    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:36:59.318963    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:36:59.819032    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:00.318959    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:00.819007    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:01.318925    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:01.819004    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:02.318900    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:02.818938    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:03.318896    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:03.818843    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:04.318914    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:04.818824    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:05.318789    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:05.818890    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:06.318784    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:06.818791    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:07.318787    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:07.357143    1522 kubeadm.go:1081] duration metric: took 12.670689708s to wait for elevateKubeSystemPrivileges.
	I0914 14:37:07.357158    1522 kubeadm.go:406] StartCluster complete in 19.557685291s
	I0914 14:37:07.357184    1522 settings.go:142] acquiring lock: {Name:mkcccc97e247e7e1b2e556ccc64336c05a92af53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 14:37:07.357360    1522 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17243-1006/kubeconfig
	I0914 14:37:07.357606    1522 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/kubeconfig: {Name:mkeec13fc5a79792669e9cedabfbe21efeb27d3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 14:37:07.357803    1522 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0914 14:37:07.357856    1522 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0914 14:37:07.357902    1522 addons.go:69] Setting volumesnapshots=true in profile "addons-388000"
	I0914 14:37:07.357909    1522 addons.go:231] Setting addon volumesnapshots=true in "addons-388000"
	I0914 14:37:07.357912    1522 addons.go:69] Setting ingress=true in profile "addons-388000"
	I0914 14:37:07.357919    1522 addons.go:231] Setting addon ingress=true in "addons-388000"
	I0914 14:37:07.357926    1522 host.go:66] Checking if "addons-388000" exists ...
	I0914 14:37:07.357934    1522 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-388000"
	I0914 14:37:07.357942    1522 host.go:66] Checking if "addons-388000" exists ...
	I0914 14:37:07.357951    1522 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-388000"
	I0914 14:37:07.357967    1522 host.go:66] Checking if "addons-388000" exists ...
	I0914 14:37:07.357975    1522 addons.go:69] Setting ingress-dns=true in profile "addons-388000"
	I0914 14:37:07.357985    1522 addons.go:69] Setting metrics-server=true in profile "addons-388000"
	I0914 14:37:07.358004    1522 addons.go:231] Setting addon ingress-dns=true in "addons-388000"
	I0914 14:37:07.358008    1522 addons.go:231] Setting addon metrics-server=true in "addons-388000"
	I0914 14:37:07.358046    1522 host.go:66] Checking if "addons-388000" exists ...
	I0914 14:37:07.358051    1522 host.go:66] Checking if "addons-388000" exists ...
	I0914 14:37:07.358066    1522 addons.go:69] Setting inspektor-gadget=true in profile "addons-388000"
	I0914 14:37:07.358074    1522 addons.go:231] Setting addon inspektor-gadget=true in "addons-388000"
	I0914 14:37:07.358086    1522 host.go:66] Checking if "addons-388000" exists ...
	I0914 14:37:07.358133    1522 config.go:182] Loaded profile config "addons-388000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 14:37:07.358210    1522 addons.go:69] Setting registry=true in profile "addons-388000"
	I0914 14:37:07.358222    1522 addons.go:231] Setting addon registry=true in "addons-388000"
	I0914 14:37:07.358259    1522 addons.go:69] Setting cloud-spanner=true in profile "addons-388000"
	I0914 14:37:07.358263    1522 addons.go:69] Setting default-storageclass=true in profile "addons-388000"
	I0914 14:37:07.358265    1522 addons.go:231] Setting addon cloud-spanner=true in "addons-388000"
	I0914 14:37:07.358266    1522 host.go:66] Checking if "addons-388000" exists ...
	I0914 14:37:07.358273    1522 addons.go:69] Setting storage-provisioner=true in profile "addons-388000"
	I0914 14:37:07.358276    1522 host.go:66] Checking if "addons-388000" exists ...
	I0914 14:37:07.358278    1522 addons.go:231] Setting addon storage-provisioner=true in "addons-388000"
	I0914 14:37:07.358289    1522 host.go:66] Checking if "addons-388000" exists ...
	W0914 14:37:07.358332    1522 host.go:54] host status for "addons-388000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/monitor: connect: connection refused
	W0914 14:37:07.358339    1522 addons.go:277] "addons-388000" is not running, setting inspektor-gadget=true and skipping enablement (err=<nil>)
	W0914 14:37:07.358450    1522 host.go:54] host status for "addons-388000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/monitor: connect: connection refused
	W0914 14:37:07.358453    1522 addons.go:277] "addons-388000" is not running, setting volumesnapshots=true and skipping enablement (err=<nil>)
	W0914 14:37:07.358483    1522 host.go:54] host status for "addons-388000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/monitor: connect: connection refused
	W0914 14:37:07.358489    1522 addons.go:277] "addons-388000" is not running, setting cloud-spanner=true and skipping enablement (err=<nil>)
	W0914 14:37:07.358257    1522 host.go:54] host status for "addons-388000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/monitor: connect: connection refused
	W0914 14:37:07.358494    1522 addons.go:277] "addons-388000" is not running, setting ingress=true and skipping enablement (err=<nil>)
	I0914 14:37:07.358496    1522 addons.go:467] Verifying addon ingress=true in "addons-388000"
	W0914 14:37:07.358500    1522 host.go:54] host status for "addons-388000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/monitor: connect: connection refused
	W0914 14:37:07.358504    1522 addons.go:277] "addons-388000" is not running, setting storage-provisioner=true and skipping enablement (err=<nil>)
	I0914 14:37:07.363429    1522 out.go:177] * Verifying ingress addon...
	I0914 14:37:07.358269    1522 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-388000"
	W0914 14:37:07.358528    1522 host.go:54] host status for "addons-388000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/monitor: connect: connection refused
	I0914 14:37:07.358271    1522 addons.go:69] Setting gcp-auth=true in profile "addons-388000"
	W0914 14:37:07.358722    1522 host.go:54] host status for "addons-388000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/monitor: connect: connection refused
	W0914 14:37:07.370487    1522 addons.go:277] "addons-388000" is not running, setting ingress-dns=true and skipping enablement (err=<nil>)
	I0914 14:37:07.370528    1522 mustload.go:65] Loading cluster: addons-388000
	W0914 14:37:07.370533    1522 addons.go:277] "addons-388000" is not running, setting registry=true and skipping enablement (err=<nil>)
	I0914 14:37:07.370877    1522 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0914 14:37:07.371899    1522 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-388000" context rescaled to 1 replicas
	I0914 14:37:07.372685    1522 addons.go:231] Setting addon default-storageclass=true in "addons-388000"
	I0914 14:37:07.374445    1522 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0914 14:37:07.377503    1522 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I0914 14:37:07.377530    1522 addons.go:467] Verifying addon registry=true in "addons-388000"
	I0914 14:37:07.377544    1522 host.go:66] Checking if "addons-388000" exists ...
	I0914 14:37:07.377592    1522 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 14:37:07.377611    1522 config.go:182] Loaded profile config "addons-388000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 14:37:07.379668    1522 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0914 14:37:07.387418    1522 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0914 14:37:07.384502    1522 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 14:37:07.385215    1522 host.go:66] Checking if "addons-388000" exists ...
	I0914 14:37:07.385566    1522 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 14:37:07.399476    1522 out.go:177] * Verifying Kubernetes components...
	I0914 14:37:07.399484    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 14:37:07.405519    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 14:37:07.405539    1522 out.go:177] * Verifying registry addon...
	I0914 14:37:07.409300    1522 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0914 14:37:07.413413    1522 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0914 14:37:07.409310    1522 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/id_rsa Username:docker}
	I0914 14:37:07.409318    1522 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/id_rsa Username:docker}
	I0914 14:37:07.413772    1522 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0914 14:37:07.421473    1522 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0914 14:37:07.425266    1522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 14:37:07.434436    1522 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0914 14:37:07.437375    1522 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=registry
	I0914 14:37:07.438456    1522 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0914 14:37:07.450436    1522 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0914 14:37:07.460462    1522 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0914 14:37:07.463476    1522 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0914 14:37:07.463485    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0914 14:37:07.463494    1522 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/id_rsa Username:docker}
	I0914 14:37:07.497507    1522 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 14:37:07.497516    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0914 14:37:07.503780    1522 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 14:37:07.503787    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 14:37:07.509075    1522 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 14:37:07.509081    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 14:37:07.516870    1522 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 14:37:07.522898    1522 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 14:37:07.539508    1522 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0914 14:37:07.539521    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0914 14:37:07.591865    1522 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0914 14:37:07.591879    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0914 14:37:07.635732    1522 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0914 14:37:07.635742    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0914 14:37:07.644322    1522 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0914 14:37:07.644333    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0914 14:37:07.649557    1522 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0914 14:37:07.649568    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0914 14:37:07.681313    1522 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0914 14:37:07.681325    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0914 14:37:07.685931    1522 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0914 14:37:07.685936    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0914 14:37:07.690914    1522 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0914 14:37:07.690921    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0914 14:37:07.695920    1522 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0914 14:37:07.695926    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0914 14:37:07.700851    1522 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0914 14:37:07.700856    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0914 14:37:07.705677    1522 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0914 14:37:08.213892    1522 start.go:917] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
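Editor's note: the host record above is injected by the long `kubectl get configmap coredns | sed ... | kubectl replace -f -` pipeline logged earlier, which is hard to read as a one-liner. The Go sketch below shows only the hosts-block half of that edit (the pipeline also prepends a `log` directive before `errors`); the sample Corefile and function name are assumptions for illustration.

```go
package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a hosts stanza mapping host.minikube.internal to the
// host gateway IP immediately before the "forward . /etc/resolv.conf" line,
// mirroring the effect of the sed pipeline on the coredns ConfigMap.
func injectHostRecord(corefile, hostIP string) string {
	hostsBlock := fmt.Sprintf(
		"        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }", hostIP)
	var out []string
	for _, line := range strings.Split(corefile, "\n") {
		if strings.Contains(line, "forward . /etc/resolv.conf") {
			out = append(out, hostsBlock)
		}
		out = append(out, line)
	}
	return strings.Join(out, "\n")
}

func main() {
	// Simplified Corefile for demonstration only.
	corefile := `.:53 {
        errors
        health
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        cache 30
    }`
	fmt.Println(injectHostRecord(corefile, "192.168.105.1"))
}
```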
	I0914 14:37:08.214323    1522 node_ready.go:35] waiting up to 6m0s for node "addons-388000" to be "Ready" ...
	I0914 14:37:08.215929    1522 node_ready.go:49] node "addons-388000" has status "Ready":"True"
	I0914 14:37:08.215948    1522 node_ready.go:38] duration metric: took 1.599458ms waiting for node "addons-388000" to be "Ready" ...
	I0914 14:37:08.215953    1522 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 14:37:08.218780    1522 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-6php8" in "kube-system" namespace to be "Ready" ...
	I0914 14:37:08.378405    1522 addons.go:467] Verifying addon metrics-server=true in "addons-388000"
	I0914 14:37:08.878056    1522 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.172383083s)
	I0914 14:37:08.878074    1522 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-388000"
	I0914 14:37:08.882346    1522 out.go:177] * Verifying csi-hostpath-driver addon...
	I0914 14:37:08.892719    1522 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0914 14:37:08.895508    1522 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0914 14:37:08.895515    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:08.901644    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:09.404389    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:09.734233    1522 pod_ready.go:97] error getting pod "coredns-5dd5756b68-6php8" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-6php8" not found
	I0914 14:37:09.734244    1522 pod_ready.go:81] duration metric: took 1.515495542s waiting for pod "coredns-5dd5756b68-6php8" in "kube-system" namespace to be "Ready" ...
	E0914 14:37:09.734250    1522 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-6php8" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-6php8" not found
	I0914 14:37:09.734253    1522 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-psn28" in "kube-system" namespace to be "Ready" ...
	I0914 14:37:09.736576    1522 pod_ready.go:92] pod "coredns-5dd5756b68-psn28" in "kube-system" namespace has status "Ready":"True"
	I0914 14:37:09.736583    1522 pod_ready.go:81] duration metric: took 2.327542ms waiting for pod "coredns-5dd5756b68-psn28" in "kube-system" namespace to be "Ready" ...
	I0914 14:37:09.736588    1522 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-388000" in "kube-system" namespace to be "Ready" ...
	I0914 14:37:09.739033    1522 pod_ready.go:92] pod "etcd-addons-388000" in "kube-system" namespace has status "Ready":"True"
	I0914 14:37:09.739038    1522 pod_ready.go:81] duration metric: took 2.447792ms waiting for pod "etcd-addons-388000" in "kube-system" namespace to be "Ready" ...
	I0914 14:37:09.739041    1522 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-388000" in "kube-system" namespace to be "Ready" ...
	I0914 14:37:09.741595    1522 pod_ready.go:92] pod "kube-apiserver-addons-388000" in "kube-system" namespace has status "Ready":"True"
	I0914 14:37:09.741601    1522 pod_ready.go:81] duration metric: took 2.556083ms waiting for pod "kube-apiserver-addons-388000" in "kube-system" namespace to be "Ready" ...
	I0914 14:37:09.741605    1522 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-388000" in "kube-system" namespace to be "Ready" ...
	I0914 14:37:09.904411    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:10.016583    1522 pod_ready.go:92] pod "kube-controller-manager-addons-388000" in "kube-system" namespace has status "Ready":"True"
	I0914 14:37:10.016591    1522 pod_ready.go:81] duration metric: took 274.98975ms waiting for pod "kube-controller-manager-addons-388000" in "kube-system" namespace to be "Ready" ...
	I0914 14:37:10.016595    1522 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8pbsf" in "kube-system" namespace to be "Ready" ...
	I0914 14:37:10.404994    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:10.417030    1522 pod_ready.go:92] pod "kube-proxy-8pbsf" in "kube-system" namespace has status "Ready":"True"
	I0914 14:37:10.417036    1522 pod_ready.go:81] duration metric: took 400.447833ms waiting for pod "kube-proxy-8pbsf" in "kube-system" namespace to be "Ready" ...
	I0914 14:37:10.417041    1522 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-388000" in "kube-system" namespace to be "Ready" ...
	I0914 14:37:10.816814    1522 pod_ready.go:92] pod "kube-scheduler-addons-388000" in "kube-system" namespace has status "Ready":"True"
	I0914 14:37:10.816823    1522 pod_ready.go:81] duration metric: took 399.789417ms waiting for pod "kube-scheduler-addons-388000" in "kube-system" namespace to be "Ready" ...
	I0914 14:37:10.816827    1522 pod_ready.go:38] duration metric: took 2.600935083s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
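Editor's note: the pod_ready waits above poll each system-critical pod until its Ready condition is True (or the 6m0s budget runs out). A minimal client-go sketch of that kind of wait is shown below; the helper name, polling interval, and kubeconfig path are assumptions for illustration, not minikube's actual implementation.

```go
package main

import (
	"context"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls a pod until its Ready condition is True or the timeout expires.
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // pod may not exist yet; keep retrying until the timeout
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				return true, nil
			}
		}
		return false, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	if err := waitPodReady(cs, "kube-system", "etcd-addons-388000", 6*time.Minute); err != nil {
		log.Fatal(err)
	}
}
```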
	I0914 14:37:10.816835    1522 api_server.go:52] waiting for apiserver process to appear ...
	I0914 14:37:10.816886    1522 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 14:37:10.821727    1522 api_server.go:72] duration metric: took 3.437324417s to wait for apiserver process to appear ...
	I0914 14:37:10.821733    1522 api_server.go:88] waiting for apiserver healthz status ...
	I0914 14:37:10.821738    1522 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I0914 14:37:10.825342    1522 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
	I0914 14:37:10.826107    1522 api_server.go:141] control plane version: v1.28.1
	I0914 14:37:10.826114    1522 api_server.go:131] duration metric: took 4.378333ms to wait for apiserver health ...
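Editor's note: the healthz check above is a plain HTTPS GET against the apiserver that expects a 200 response with body "ok". A minimal sketch is shown below; certificate verification is skipped only for brevity here, whereas the real check validates against the cluster's CA.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// InsecureSkipVerify is for this sketch only; do not skip verification in real checks.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.105.2:8443/healthz")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d: %s\n", resp.StatusCode, body) // expect "200: ok" for a healthy control plane
}
```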
	I0914 14:37:10.826117    1522 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 14:37:10.904363    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:11.018876    1522 system_pods.go:59] 10 kube-system pods found
	I0914 14:37:11.018886    1522 system_pods.go:61] "coredns-5dd5756b68-psn28" [50c0e128-9a93-456c-83af-dfbcda64eaa4] Running
	I0914 14:37:11.018891    1522 system_pods.go:61] "csi-hostpath-attacher-0" [29be2dba-12b9-4442-8c83-8d24fd054a90] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0914 14:37:11.018894    1522 system_pods.go:61] "csi-hostpath-resizer-0" [11fcc7a2-d176-442f-9cd6-04668da8d423] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0914 14:37:11.018898    1522 system_pods.go:61] "csi-hostpathplugin-b5k2m" [aa03259b-6f1a-4537-95f6-47e8cf8fcc39] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0914 14:37:11.018909    1522 system_pods.go:61] "etcd-addons-388000" [b91a9e99-fb10-4340-977e-536225df8415] Running
	I0914 14:37:11.018914    1522 system_pods.go:61] "kube-apiserver-addons-388000" [43fed39d-32f3-4b45-b43c-d9918758a66c] Running
	I0914 14:37:11.018917    1522 system_pods.go:61] "kube-controller-manager-addons-388000" [31eb0c68-03ca-4907-921b-14ccef970edf] Running
	I0914 14:37:11.018920    1522 system_pods.go:61] "kube-proxy-8pbsf" [e9d3ab50-7594-4360-8226-d37e954aca6e] Running
	I0914 14:37:11.018923    1522 system_pods.go:61] "kube-scheduler-addons-388000" [d931a34d-1c14-4544-80cd-ce847a1f1af8] Running
	I0914 14:37:11.018927    1522 system_pods.go:61] "metrics-server-7c66d45ddc-dtj78" [7b539063-f45b-4a15-97e7-6713ea57e519] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 14:37:11.018932    1522 system_pods.go:74] duration metric: took 192.817125ms to wait for pod list to return data ...
	I0914 14:37:11.018935    1522 default_sa.go:34] waiting for default service account to be created ...
	I0914 14:37:11.216117    1522 default_sa.go:45] found service account: "default"
	I0914 14:37:11.216127    1522 default_sa.go:55] duration metric: took 197.1925ms for default service account to be created ...
	I0914 14:37:11.216130    1522 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 14:37:11.404125    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:11.419144    1522 system_pods.go:86] 10 kube-system pods found
	I0914 14:37:11.419151    1522 system_pods.go:89] "coredns-5dd5756b68-psn28" [50c0e128-9a93-456c-83af-dfbcda64eaa4] Running
	I0914 14:37:11.419155    1522 system_pods.go:89] "csi-hostpath-attacher-0" [29be2dba-12b9-4442-8c83-8d24fd054a90] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0914 14:37:11.419158    1522 system_pods.go:89] "csi-hostpath-resizer-0" [11fcc7a2-d176-442f-9cd6-04668da8d423] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0914 14:37:11.419163    1522 system_pods.go:89] "csi-hostpathplugin-b5k2m" [aa03259b-6f1a-4537-95f6-47e8cf8fcc39] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0914 14:37:11.419167    1522 system_pods.go:89] "etcd-addons-388000" [b91a9e99-fb10-4340-977e-536225df8415] Running
	I0914 14:37:11.419169    1522 system_pods.go:89] "kube-apiserver-addons-388000" [43fed39d-32f3-4b45-b43c-d9918758a66c] Running
	I0914 14:37:11.419176    1522 system_pods.go:89] "kube-controller-manager-addons-388000" [31eb0c68-03ca-4907-921b-14ccef970edf] Running
	I0914 14:37:11.419178    1522 system_pods.go:89] "kube-proxy-8pbsf" [e9d3ab50-7594-4360-8226-d37e954aca6e] Running
	I0914 14:37:11.419180    1522 system_pods.go:89] "kube-scheduler-addons-388000" [d931a34d-1c14-4544-80cd-ce847a1f1af8] Running
	I0914 14:37:11.419183    1522 system_pods.go:89] "metrics-server-7c66d45ddc-dtj78" [7b539063-f45b-4a15-97e7-6713ea57e519] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 14:37:11.419189    1522 system_pods.go:126] duration metric: took 203.059ms to wait for k8s-apps to be running ...
	I0914 14:37:11.419193    1522 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 14:37:11.419242    1522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 14:37:11.424702    1522 system_svc.go:56] duration metric: took 5.506625ms WaitForService to wait for kubelet.
	I0914 14:37:11.424708    1522 kubeadm.go:581] duration metric: took 4.040322208s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0914 14:37:11.424718    1522 node_conditions.go:102] verifying NodePressure condition ...
	I0914 14:37:11.616510    1522 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0914 14:37:11.616524    1522 node_conditions.go:123] node cpu capacity is 2
	I0914 14:37:11.616531    1522 node_conditions.go:105] duration metric: took 191.81375ms to run NodePressure ...
	I0914 14:37:11.616536    1522 start.go:228] waiting for startup goroutines ...
	I0914 14:37:11.904062    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:12.404356    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:12.904283    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:13.404719    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:13.905195    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:14.010940    1522 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0914 14:37:14.010958    1522 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/id_rsa Username:docker}
	I0914 14:37:14.050416    1522 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0914 14:37:14.056158    1522 addons.go:231] Setting addon gcp-auth=true in "addons-388000"
	I0914 14:37:14.056180    1522 host.go:66] Checking if "addons-388000" exists ...
	I0914 14:37:14.056914    1522 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0914 14:37:14.056921    1522 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/id_rsa Username:docker}
	I0914 14:37:14.098984    1522 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0914 14:37:14.102963    1522 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0914 14:37:14.106843    1522 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0914 14:37:14.106851    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0914 14:37:14.112250    1522 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0914 14:37:14.112259    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0914 14:37:14.117057    1522 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0914 14:37:14.117063    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0914 14:37:14.122524    1522 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0914 14:37:14.407542    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:14.453711    1522 addons.go:467] Verifying addon gcp-auth=true in "addons-388000"
	I0914 14:37:14.458827    1522 out.go:177] * Verifying gcp-auth addon...
	I0914 14:37:14.469206    1522 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0914 14:37:14.473873    1522 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0914 14:37:14.473883    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:14.477552    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:14.905449    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:14.981028    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:15.404241    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:15.481017    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:15.904406    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:15.981050    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:16.404161    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:16.481356    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:16.904348    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:16.980852    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:17.404432    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:17.480937    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:17.904061    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:17.980969    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:18.404491    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:18.481031    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:18.904020    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:18.981054    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:19.405323    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:19.480019    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:19.904276    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:19.980839    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:20.404204    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:20.481250    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:20.904037    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:20.981407    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:21.404239    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:21.481248    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:21.904261    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:21.981109    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:22.405094    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:22.481049    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:22.904407    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:22.981227    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:23.404066    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:23.480779    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:23.904000    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:23.980955    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:24.404182    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:24.480903    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:24.904034    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:24.980896    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:25.403993    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:25.480949    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:25.903717    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:25.981591    1522 kapi.go:107] duration metric: took 11.512675166s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0914 14:37:25.985811    1522 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-388000 cluster.
	I0914 14:37:25.990747    1522 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0914 14:37:25.993661    1522 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0914 14:37:26.404089    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:26.904132    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:27.405664    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:27.903941    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:28.403884    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:28.903901    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:29.404487    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:29.903852    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:30.404685    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:30.903890    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:31.403753    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:31.903926    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:32.404318    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:32.903835    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:33.403834    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:33.903687    1522 kapi.go:107] duration metric: took 25.011601375s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0914 14:43:07.370409    1522 kapi.go:107] duration metric: took 6m0.008648916s to wait for app.kubernetes.io/name=ingress-nginx ...
	W0914 14:43:07.370479    1522 out.go:239] ! Enabling 'ingress' returned an error: running callbacks: [waiting for app.kubernetes.io/name=ingress-nginx pods: context deadline exceeded]
	I0914 14:43:07.418192    1522 kapi.go:107] duration metric: took 6m0.013534334s to wait for kubernetes.io/minikube-addons=registry ...
	W0914 14:43:07.418227    1522 out.go:239] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
	I0914 14:43:07.425587    1522 out.go:177] * Enabled addons: inspektor-gadget, volumesnapshots, cloud-spanner, storage-provisioner, ingress-dns, default-storageclass, metrics-server, gcp-auth, csi-hostpath-driver
	I0914 14:43:07.433636    1522 addons.go:502] enable addons completed in 6m0.084906709s: enabled=[inspektor-gadget volumesnapshots cloud-spanner storage-provisioner ingress-dns default-storageclass metrics-server gcp-auth csi-hostpath-driver]
	I0914 14:43:07.433650    1522 start.go:233] waiting for cluster config update ...
	I0914 14:43:07.433664    1522 start.go:242] writing updated cluster config ...
	I0914 14:43:07.433996    1522 ssh_runner.go:195] Run: rm -f paused
	I0914 14:43:07.464084    1522 start.go:600] kubectl: 1.27.2, cluster: 1.28.1 (minor skew: 1)
	I0914 14:43:07.467672    1522 out.go:177] * Done! kubectl is now configured to use "addons-388000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Thu 2023-09-14 21:36:36 UTC, ends at Thu 2023-09-14 21:56:45 UTC. --
	Sep 14 21:37:28 addons-388000 cri-dockerd[1056]: time="2023-09-14T21:37:28Z" level=info msg="Stop pulling image registry.k8s.io/sig-storage/livenessprobe:v2.8.0@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0: Status: Downloaded newer image for registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0"
	Sep 14 21:37:28 addons-388000 dockerd[1162]: time="2023-09-14T21:37:28.601133366Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 14 21:37:28 addons-388000 dockerd[1162]: time="2023-09-14T21:37:28.601186991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 21:37:28 addons-388000 dockerd[1162]: time="2023-09-14T21:37:28.601201491Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 14 21:37:28 addons-388000 dockerd[1162]: time="2023-09-14T21:37:28.601212200Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 21:37:28 addons-388000 dockerd[1156]: time="2023-09-14T21:37:28.692071408Z" level=warning msg="reference for unknown type: " digest="sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8" remote="registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8"
	Sep 14 21:37:31 addons-388000 cri-dockerd[1056]: time="2023-09-14T21:37:31Z" level=info msg="Stop pulling image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8: Status: Downloaded newer image for registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8"
	Sep 14 21:37:31 addons-388000 dockerd[1162]: time="2023-09-14T21:37:31.232372201Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 14 21:37:31 addons-388000 dockerd[1162]: time="2023-09-14T21:37:31.232402326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 21:37:31 addons-388000 dockerd[1162]: time="2023-09-14T21:37:31.232412909Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 14 21:37:31 addons-388000 dockerd[1162]: time="2023-09-14T21:37:31.232417493Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 21:37:31 addons-388000 dockerd[1156]: time="2023-09-14T21:37:31.325578326Z" level=warning msg="reference for unknown type: " digest="sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f" remote="registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f"
	Sep 14 21:37:33 addons-388000 cri-dockerd[1056]: time="2023-09-14T21:37:33Z" level=info msg="Stop pulling image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f: Status: Downloaded newer image for registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f"
	Sep 14 21:37:33 addons-388000 dockerd[1162]: time="2023-09-14T21:37:33.503964160Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 14 21:37:33 addons-388000 dockerd[1162]: time="2023-09-14T21:37:33.503991702Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 21:37:33 addons-388000 dockerd[1162]: time="2023-09-14T21:37:33.504000744Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 14 21:37:33 addons-388000 dockerd[1162]: time="2023-09-14T21:37:33.504006994Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 21:55:14 addons-388000 dockerd[1162]: time="2023-09-14T21:55:14.601399406Z" level=info msg="shim disconnected" id=e99b5961a5b90d9d962226a3f942a728e507531402beb7414a01fa9d79554d64 namespace=moby
	Sep 14 21:55:14 addons-388000 dockerd[1162]: time="2023-09-14T21:55:14.601432322Z" level=warning msg="cleaning up after shim disconnected" id=e99b5961a5b90d9d962226a3f942a728e507531402beb7414a01fa9d79554d64 namespace=moby
	Sep 14 21:55:14 addons-388000 dockerd[1162]: time="2023-09-14T21:55:14.601436822Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 14 21:55:14 addons-388000 dockerd[1156]: time="2023-09-14T21:55:14.601734604Z" level=info msg="ignoring event" container=e99b5961a5b90d9d962226a3f942a728e507531402beb7414a01fa9d79554d64 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 14 21:55:14 addons-388000 dockerd[1156]: time="2023-09-14T21:55:14.667931603Z" level=info msg="ignoring event" container=40cdfd9e591d6d222ceba6780cb17bfc909ac2ecbb55e2501c67b4a00e7499a9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 14 21:55:14 addons-388000 dockerd[1162]: time="2023-09-14T21:55:14.668071849Z" level=info msg="shim disconnected" id=40cdfd9e591d6d222ceba6780cb17bfc909ac2ecbb55e2501c67b4a00e7499a9 namespace=moby
	Sep 14 21:55:14 addons-388000 dockerd[1162]: time="2023-09-14T21:55:14.668117056Z" level=warning msg="cleaning up after shim disconnected" id=40cdfd9e591d6d222ceba6780cb17bfc909ac2ecbb55e2501c67b4a00e7499a9 namespace=moby
	Sep 14 21:55:14 addons-388000 dockerd[1162]: time="2023-09-14T21:55:14.668121222Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID
	c6e7158ec87e6       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          19 minutes ago      Running             csi-snapshotter                          0                   23a9864c5e7a2
	8fbd96f503108       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          19 minutes ago      Running             csi-provisioner                          0                   23a9864c5e7a2
	5a28f3666ec4d       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            19 minutes ago      Running             liveness-probe                           0                   23a9864c5e7a2
	4a515f3dbd90e       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           19 minutes ago      Running             hostpath                                 0                   23a9864c5e7a2
	726bdbe627b06       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf                                 19 minutes ago      Running             gcp-auth                                 0                   039c490b8ce95
	c5e816aa3fb60       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                19 minutes ago      Running             node-driver-registrar                    0                   23a9864c5e7a2
	0574ef72c784a       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              19 minutes ago      Running             csi-resizer                              0                   928188ebbbe5c
	0af4f9c858980       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   19 minutes ago      Running             csi-external-health-monitor-controller   0                   23a9864c5e7a2
	9a3fe3bf72dd7       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             19 minutes ago      Running             csi-attacher                             0                   aec96cfd028be
	1f519e69776da       97e04611ad434                                                                                                                                19 minutes ago      Running             coredns                                  0                   6b82b02e01da4
	c36ca5fc76214       812f5241df7fd                                                                                                                                19 minutes ago      Running             kube-proxy                               0                   24118a5be8efa
	af45960dc2d7c       b4a5a57e99492                                                                                                                                19 minutes ago      Running             kube-scheduler                           0                   6dde63050aa99
	39f78945ed576       b29fb62480892                                                                                                                                19 minutes ago      Running             kube-apiserver                           0                   a02ab403a50ec
	f2717f532e595       8b6e1980b7584                                                                                                                                19 minutes ago      Running             kube-controller-manager                  0                   834af4f99b3bc
	5a63d0e8296f4       9cdd6470f48c8                                                                                                                                19 minutes ago      Running             etcd                                     0                   b2289ff5c077b
	
	* 
	* ==> coredns [1f519e69776d] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               addons-388000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-388000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7492f2ae2d9b6e62b385ffcd97ebad62c645e82
	                    minikube.k8s.io/name=addons-388000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_14T14_36_54_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-388000
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-388000"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Sep 2023 21:36:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-388000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Sep 2023 21:56:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Sep 2023 21:53:15 +0000   Thu, 14 Sep 2023 21:36:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Sep 2023 21:53:15 +0000   Thu, 14 Sep 2023 21:36:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Sep 2023 21:53:15 +0000   Thu, 14 Sep 2023 21:36:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Sep 2023 21:53:15 +0000   Thu, 14 Sep 2023 21:36:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-388000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 ca8cf67d46214b1fbc59c14cf3d2d66f
	  System UUID:                ca8cf67d46214b1fbc59c14cf3d2d66f
	  Boot ID:                    386c1075-3226-461a-ab43-e16ad465a6c4
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  gcp-auth                    gcp-auth-d4c87556c-pjjjl                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-5dd5756b68-psn28                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     19m
	  kube-system                 csi-hostpath-attacher-0                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 csi-hostpath-resizer-0                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 csi-hostpathplugin-b5k2m                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 etcd-addons-388000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         19m
	  kube-system                 kube-apiserver-addons-388000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-addons-388000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-8pbsf                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-addons-388000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 19m   kube-proxy       
	  Normal  Starting                 19m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  19m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  19m   kubelet          Node addons-388000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m   kubelet          Node addons-388000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m   kubelet          Node addons-388000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                19m   kubelet          Node addons-388000 status is now: NodeReady
	  Normal  RegisteredNode           19m   node-controller  Node addons-388000 event: Registered Node addons-388000 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000000] KASLR disabled due to lack of seed
	[  +0.645091] EINJ: EINJ table not found.
	[  +0.506039] systemd-fstab-generator[117]: Ignoring "noauto" for root device
	[  +0.043466] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000824] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +4.211816] systemd-fstab-generator[481]: Ignoring "noauto" for root device
	[  +0.087452] systemd-fstab-generator[492]: Ignoring "noauto" for root device
	[  +0.529791] systemd-fstab-generator[754]: Ignoring "noauto" for root device
	[  +0.178247] systemd-fstab-generator[791]: Ignoring "noauto" for root device
	[  +0.078699] systemd-fstab-generator[802]: Ignoring "noauto" for root device
	[  +0.082696] systemd-fstab-generator[815]: Ignoring "noauto" for root device
	[  +1.243164] systemd-fstab-generator[973]: Ignoring "noauto" for root device
	[  +0.079535] systemd-fstab-generator[984]: Ignoring "noauto" for root device
	[  +0.081510] systemd-fstab-generator[995]: Ignoring "noauto" for root device
	[  +0.082103] systemd-fstab-generator[1006]: Ignoring "noauto" for root device
	[  +0.084560] systemd-fstab-generator[1049]: Ignoring "noauto" for root device
	[  +2.579762] systemd-fstab-generator[1149]: Ignoring "noauto" for root device
	[  +2.146558] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.177617] systemd-fstab-generator[1466]: Ignoring "noauto" for root device
	[  +5.135787] systemd-fstab-generator[2333]: Ignoring "noauto" for root device
	[Sep14 21:37] kauditd_printk_skb: 41 callbacks suppressed
	[  +1.224924] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[  +5.069347] kauditd_printk_skb: 47 callbacks suppressed
	[  +8.062309] kauditd_printk_skb: 6 callbacks suppressed
	[  +6.104989] kauditd_printk_skb: 5 callbacks suppressed
	
	* 
	* ==> etcd [5a63d0e8296f] <==
	* {"level":"info","ts":"2023-09-14T21:36:50.716133Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-14T21:36:51.510944Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 is starting a new election at term 1"}
	{"level":"info","ts":"2023-09-14T21:36:51.511016Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-09-14T21:36:51.51104Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 received MsgPreVoteResp from c46d288d2fcb0590 at term 1"}
	{"level":"info","ts":"2023-09-14T21:36:51.511075Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became candidate at term 2"}
	{"level":"info","ts":"2023-09-14T21:36:51.511094Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 received MsgVoteResp from c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2023-09-14T21:36:51.511114Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became leader at term 2"}
	{"level":"info","ts":"2023-09-14T21:36:51.511137Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c46d288d2fcb0590 elected leader c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2023-09-14T21:36:51.511827Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"c46d288d2fcb0590","local-member-attributes":"{Name:addons-388000 ClientURLs:[https://192.168.105.2:2379]}","request-path":"/0/members/c46d288d2fcb0590/attributes","cluster-id":"6e03e7863b4f9c54","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-14T21:36:51.511953Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-14T21:36:51.512015Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-14T21:36:51.512221Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-14T21:36:51.512251Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-14T21:36:51.512273Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-14T21:36:51.512322Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-14T21:36:51.512345Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-14T21:36:51.512368Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-14T21:36:51.512743Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.2:2379"}
	{"level":"info","ts":"2023-09-14T21:36:51.513582Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-14T21:46:51.09248Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":801}
	{"level":"info","ts":"2023-09-14T21:46:51.094243Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":801,"took":"1.342212ms","hash":1083412012}
	{"level":"info","ts":"2023-09-14T21:46:51.094259Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1083412012,"revision":801,"compact-revision":-1}
	{"level":"info","ts":"2023-09-14T21:51:51.097381Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":951}
	{"level":"info","ts":"2023-09-14T21:51:51.097919Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":951,"took":"269.454µs","hash":1387439011}
	{"level":"info","ts":"2023-09-14T21:51:51.097932Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1387439011,"revision":951,"compact-revision":801}
	
	* 
	* ==> gcp-auth [726bdbe627b0] <==
	* 2023/09/14 21:37:25 GCP Auth Webhook started!
	
	* 
	* ==> kernel <==
	*  21:56:46 up 20 min,  0 users,  load average: 0.26, 0.20, 0.18
	Linux addons-388000 5.10.57 #1 SMP PREEMPT Wed Sep 13 19:05:16 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [39f78945ed57] <==
	* I0914 21:44:51.695063       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0914 21:45:51.695412       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0914 21:46:51.695845       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0914 21:46:51.765612       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0914 21:47:51.695437       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0914 21:48:51.695685       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0914 21:49:51.695503       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0914 21:50:51.694805       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0914 21:51:51.695866       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0914 21:51:51.770993       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0914 21:52:51.695289       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0914 21:53:00.844205       1 watcher.go:245] watch chan error: etcdserver: mvcc: required revision has been compacted
	I0914 21:53:51.695037       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0914 21:54:51.695833       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0914 21:55:13.478412       1 controller.go:159] removing "v1beta1.metrics.k8s.io" from AggregationController failed with: resource not found
	E0914 21:55:19.745518       1 handler_proxy.go:137] error resolving kube-system/metrics-server: service "metrics-server" not found
	W0914 21:55:19.745550       1 handler_proxy.go:93] no RequestInfo found in the context
	E0914 21:55:19.745571       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0914 21:55:19.745579       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0914 21:56:19.746684       1 handler_proxy.go:137] error resolving kube-system/metrics-server: service "metrics-server" not found
	W0914 21:56:19.746702       1 handler_proxy.go:93] no RequestInfo found in the context
	E0914 21:56:19.746726       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0914 21:56:19.746731       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [f2717f532e59] <==
	* I0914 21:37:18.703329       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-7c66d45ddc" duration="5.130125ms"
	I0914 21:37:18.703353       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-7c66d45ddc" duration="13.667µs"
	I0914 21:37:21.717272       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0914 21:37:21.725039       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0914 21:37:22.734779       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0914 21:37:22.813793       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0914 21:37:23.747243       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0914 21:37:23.753327       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0914 21:37:23.816708       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0914 21:37:23.819180       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0914 21:37:23.821789       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0914 21:37:23.822117       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-create" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0914 21:37:23.822779       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0914 21:37:24.756088       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0914 21:37:24.759099       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0914 21:37:24.761716       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0914 21:37:24.762180       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0914 21:37:24.762196       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0914 21:37:25.770929       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-d4c87556c" duration="2.310416ms"
	I0914 21:37:25.771746       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-d4c87556c" duration="11.791µs"
	I0914 21:37:53.005858       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0914 21:37:53.014644       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0914 21:37:54.003849       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0914 21:37:54.024141       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0914 21:55:13.485728       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-7c66d45ddc" duration="2.166µs"
	
	* 
	* ==> kube-proxy [c36ca5fc7621] <==
	* I0914 21:37:08.522854       1 server_others.go:69] "Using iptables proxy"
	I0914 21:37:08.529066       1 node.go:141] Successfully retrieved node IP: 192.168.105.2
	I0914 21:37:08.587870       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0914 21:37:08.587883       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0914 21:37:08.588459       1 server_others.go:152] "Using iptables Proxier"
	I0914 21:37:08.588486       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0914 21:37:08.588572       1 server.go:846] "Version info" version="v1.28.1"
	I0914 21:37:08.588578       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 21:37:08.589296       1 config.go:188] "Starting service config controller"
	I0914 21:37:08.589305       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0914 21:37:08.589315       1 config.go:97] "Starting endpoint slice config controller"
	I0914 21:37:08.589317       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0914 21:37:08.589522       1 config.go:315] "Starting node config controller"
	I0914 21:37:08.589524       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0914 21:37:08.690794       1 shared_informer.go:318] Caches are synced for node config
	I0914 21:37:08.690821       1 shared_informer.go:318] Caches are synced for service config
	I0914 21:37:08.690838       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [af45960dc2d7] <==
	* E0914 21:36:52.199210       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0914 21:36:52.199206       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0914 21:36:52.199236       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0914 21:36:52.199258       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0914 21:36:52.199265       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0914 21:36:52.199278       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0914 21:36:52.199281       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0914 21:36:52.199189       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0914 21:36:52.199288       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0914 21:36:52.199260       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0914 21:36:52.199247       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0914 21:36:52.199323       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0914 21:36:53.095318       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0914 21:36:53.095337       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0914 21:36:53.142146       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0914 21:36:53.142164       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0914 21:36:53.158912       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0914 21:36:53.159021       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0914 21:36:53.162940       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0914 21:36:53.163031       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0914 21:36:53.206403       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0914 21:36:53.206481       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0914 21:36:53.209535       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0914 21:36:53.209549       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0914 21:36:53.797539       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-09-14 21:36:36 UTC, ends at Thu 2023-09-14 21:56:46 UTC. --
	Sep 14 21:52:54 addons-388000 kubelet[2339]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 21:52:54 addons-388000 kubelet[2339]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 14 21:53:54 addons-388000 kubelet[2339]: E0914 21:53:54.524934    2339 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 14 21:53:54 addons-388000 kubelet[2339]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 14 21:53:54 addons-388000 kubelet[2339]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 21:53:54 addons-388000 kubelet[2339]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 14 21:54:54 addons-388000 kubelet[2339]: E0914 21:54:54.528203    2339 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 14 21:54:54 addons-388000 kubelet[2339]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 14 21:54:54 addons-388000 kubelet[2339]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 21:54:54 addons-388000 kubelet[2339]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 14 21:55:14 addons-388000 kubelet[2339]: I0914 21:55:14.853529    2339 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrp9b\" (UniqueName: \"kubernetes.io/projected/7b539063-f45b-4a15-97e7-6713ea57e519-kube-api-access-nrp9b\") pod \"7b539063-f45b-4a15-97e7-6713ea57e519\" (UID: \"7b539063-f45b-4a15-97e7-6713ea57e519\") "
	Sep 14 21:55:14 addons-388000 kubelet[2339]: I0914 21:55:14.853570    2339 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/7b539063-f45b-4a15-97e7-6713ea57e519-tmp-dir\") pod \"7b539063-f45b-4a15-97e7-6713ea57e519\" (UID: \"7b539063-f45b-4a15-97e7-6713ea57e519\") "
	Sep 14 21:55:14 addons-388000 kubelet[2339]: I0914 21:55:14.853720    2339 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b539063-f45b-4a15-97e7-6713ea57e519-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "7b539063-f45b-4a15-97e7-6713ea57e519" (UID: "7b539063-f45b-4a15-97e7-6713ea57e519"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Sep 14 21:55:14 addons-388000 kubelet[2339]: I0914 21:55:14.856486    2339 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b539063-f45b-4a15-97e7-6713ea57e519-kube-api-access-nrp9b" (OuterVolumeSpecName: "kube-api-access-nrp9b") pod "7b539063-f45b-4a15-97e7-6713ea57e519" (UID: "7b539063-f45b-4a15-97e7-6713ea57e519"). InnerVolumeSpecName "kube-api-access-nrp9b". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 14 21:55:14 addons-388000 kubelet[2339]: I0914 21:55:14.955474    2339 reconciler_common.go:300] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/7b539063-f45b-4a15-97e7-6713ea57e519-tmp-dir\") on node \"addons-388000\" DevicePath \"\""
	Sep 14 21:55:14 addons-388000 kubelet[2339]: I0914 21:55:14.955491    2339 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-nrp9b\" (UniqueName: \"kubernetes.io/projected/7b539063-f45b-4a15-97e7-6713ea57e519-kube-api-access-nrp9b\") on node \"addons-388000\" DevicePath \"\""
	Sep 14 21:55:15 addons-388000 kubelet[2339]: I0914 21:55:15.230909    2339 scope.go:117] "RemoveContainer" containerID="e99b5961a5b90d9d962226a3f942a728e507531402beb7414a01fa9d79554d64"
	Sep 14 21:55:15 addons-388000 kubelet[2339]: I0914 21:55:15.241455    2339 scope.go:117] "RemoveContainer" containerID="e99b5961a5b90d9d962226a3f942a728e507531402beb7414a01fa9d79554d64"
	Sep 14 21:55:15 addons-388000 kubelet[2339]: E0914 21:55:15.242184    2339 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: e99b5961a5b90d9d962226a3f942a728e507531402beb7414a01fa9d79554d64" containerID="e99b5961a5b90d9d962226a3f942a728e507531402beb7414a01fa9d79554d64"
	Sep 14 21:55:15 addons-388000 kubelet[2339]: I0914 21:55:15.242221    2339 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"e99b5961a5b90d9d962226a3f942a728e507531402beb7414a01fa9d79554d64"} err="failed to get container status \"e99b5961a5b90d9d962226a3f942a728e507531402beb7414a01fa9d79554d64\": rpc error: code = Unknown desc = Error response from daemon: No such container: e99b5961a5b90d9d962226a3f942a728e507531402beb7414a01fa9d79554d64"
	Sep 14 21:55:16 addons-388000 kubelet[2339]: I0914 21:55:16.518655    2339 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="7b539063-f45b-4a15-97e7-6713ea57e519" path="/var/lib/kubelet/pods/7b539063-f45b-4a15-97e7-6713ea57e519/volumes"
	Sep 14 21:55:54 addons-388000 kubelet[2339]: E0914 21:55:54.525045    2339 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 14 21:55:54 addons-388000 kubelet[2339]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 14 21:55:54 addons-388000 kubelet[2339]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 21:55:54 addons-388000 kubelet[2339]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-388000 -n addons-388000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-388000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (0.76s)
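
A note on what actually failed here: the kapi.go lines above show minikube's addon wait loop polling for pods that match a label selector (app.kubernetes.io/name=ingress-nginx, kubernetes.io/minikube-addons=registry) and giving up when its context deadline expires, which is why both errors read "context deadline exceeded" after roughly 6m0s. The sketch below is only a rough illustration of that style of wait written against client-go, not minikube's actual implementation; the package layout, the 6-minute deadline, the namespace, and the helper name waitForLabeledPods are assumptions made for the example.

// wait_sketch.go: a minimal, hypothetical client-go sketch of a
// label-selector wait with a context deadline, mirroring the kind of
// polling loop that timed out in the log above. Assumes a recent
// client-go/apimachinery (for wait.PollUntilContextCancel) and a
// kubeconfig at the default location.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForLabeledPods polls until at least one pod matching selector in ns is
// Running, or until ctx is cancelled (for example, its deadline expires).
func waitForLabeledPods(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
	return wait.PollUntilContextCancel(ctx, 500*time.Millisecond, true, func(ctx context.Context) (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return false, nil // transient API errors: keep polling
		}
		if len(pods.Items) == 0 {
			return false, nil // nothing scheduled yet
		}
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				return false, nil // still Pending, as in the log above
			}
		}
		return true, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Mirror the ~6m deadline seen above; when it expires before any matching
	// pod is Running, the caller reports "context deadline exceeded".
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()

	if err := waitForLabeledPods(ctx, cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx"); err != nil {
		fmt.Println("wait failed:", err)
	}
}

In this run no ingress-nginx or registry pod ever became Running, so a loop of this shape keeps returning false until the deadline fires, which matches the errors surfaced for the 'ingress' and 'registry' addons above.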

TestAddons/parallel/InspektorGadget (480.83s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
addons_test.go:814: ***** TestAddons/parallel/InspektorGadget: pod "k8s-app=gadget" failed to start within 8m0s: context deadline exceeded ****
addons_test.go:814: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-388000 -n addons-388000
addons_test.go:814: TestAddons/parallel/InspektorGadget: showing logs for failed pods as of 2023-09-14 15:03:13.589082 -0700 PDT m=+1660.787828667
addons_test.go:815: failed waiting for inspektor-gadget pod: k8s-app=gadget within 8m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-388000 -n addons-388000
helpers_test.go:244: <<< TestAddons/parallel/InspektorGadget FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/InspektorGadget]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-388000 logs -n 25
helpers_test.go:252: TestAddons/parallel/InspektorGadget logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-917000 | jenkins | v1.31.2 | 14 Sep 23 14:35 PDT |                     |
	|         | -p download-only-917000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-917000 | jenkins | v1.31.2 | 14 Sep 23 14:35 PDT |                     |
	|         | -p download-only-917000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.31.2 | 14 Sep 23 14:36 PDT | 14 Sep 23 14:36 PDT |
	| delete  | -p download-only-917000        | download-only-917000 | jenkins | v1.31.2 | 14 Sep 23 14:36 PDT | 14 Sep 23 14:36 PDT |
	| delete  | -p download-only-917000        | download-only-917000 | jenkins | v1.31.2 | 14 Sep 23 14:36 PDT | 14 Sep 23 14:36 PDT |
	| start   | --download-only -p             | binary-mirror-231000 | jenkins | v1.31.2 | 14 Sep 23 14:36 PDT |                     |
	|         | binary-mirror-231000           |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --binary-mirror                |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49379         |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-231000        | binary-mirror-231000 | jenkins | v1.31.2 | 14 Sep 23 14:36 PDT | 14 Sep 23 14:36 PDT |
	| start   | -p addons-388000               | addons-388000        | jenkins | v1.31.2 | 14 Sep 23 14:36 PDT | 14 Sep 23 14:43 PDT |
	|         | --wait=true --memory=4000      |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --addons=registry              |                      |         |         |                     |                     |
	|         | --addons=metrics-server        |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                      |         |         |                     |                     |
	|         | --addons=gcp-auth              |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|         | --addons=ingress               |                      |         |         |                     |                     |
	|         | --addons=ingress-dns           |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-388000        | jenkins | v1.31.2 | 14 Sep 23 14:55 PDT |                     |
	|         | addons-388000                  |                      |         |         |                     |                     |
	| addons  | addons-388000 addons           | addons-388000        | jenkins | v1.31.2 | 14 Sep 23 14:55 PDT | 14 Sep 23 14:55 PDT |
	|         | disable metrics-server         |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                      |         |         |                     |                     |
	| addons  | enable headlamp                | addons-388000        | jenkins | v1.31.2 | 14 Sep 23 14:56 PDT | 14 Sep 23 14:56 PDT |
	|         | -p addons-388000               |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/14 14:36:23
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.21.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 14:36:23.572515    1522 out.go:296] Setting OutFile to fd 1 ...
	I0914 14:36:23.572636    1522 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 14:36:23.572639    1522 out.go:309] Setting ErrFile to fd 2...
	I0914 14:36:23.572642    1522 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 14:36:23.572752    1522 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17243-1006/.minikube/bin
	I0914 14:36:23.573756    1522 out.go:303] Setting JSON to false
	I0914 14:36:23.588610    1522 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":357,"bootTime":1694727026,"procs":391,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 14:36:23.588683    1522 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0914 14:36:23.593630    1522 out.go:177] * [addons-388000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0914 14:36:23.600459    1522 out.go:177]   - MINIKUBE_LOCATION=17243
	I0914 14:36:23.600497    1522 notify.go:220] Checking for updates...
	I0914 14:36:23.603591    1522 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	I0914 14:36:23.606425    1522 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 14:36:23.609496    1522 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 14:36:23.612541    1522 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	I0914 14:36:23.615423    1522 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 14:36:23.618648    1522 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 14:36:23.622479    1522 out.go:177] * Using the qemu2 driver based on user configuration
	I0914 14:36:23.629482    1522 start.go:298] selected driver: qemu2
	I0914 14:36:23.629487    1522 start.go:902] validating driver "qemu2" against <nil>
	I0914 14:36:23.629493    1522 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 14:36:23.631382    1522 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0914 14:36:23.634542    1522 out.go:177] * Automatically selected the socket_vmnet network
	I0914 14:36:23.637548    1522 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 14:36:23.637570    1522 cni.go:84] Creating CNI manager for ""
	I0914 14:36:23.637578    1522 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 14:36:23.637583    1522 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0914 14:36:23.637590    1522 start_flags.go:321] config:
	{Name:addons-388000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:addons-388000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 14:36:23.641729    1522 iso.go:125] acquiring lock: {Name:mkd5b68b252c18146a825ff5365883948b3c6983 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 14:36:23.649492    1522 out.go:177] * Starting control plane node addons-388000 in cluster addons-388000
	I0914 14:36:23.653459    1522 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0914 14:36:23.653478    1522 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0914 14:36:23.653492    1522 cache.go:57] Caching tarball of preloaded images
	I0914 14:36:23.653557    1522 preload.go:174] Found /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 14:36:23.653564    1522 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0914 14:36:23.653811    1522 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/config.json ...
	I0914 14:36:23.653825    1522 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/config.json: {Name:mk9010c5dfb0ad4a966bb29118112217ba3b6cf7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 14:36:23.654041    1522 start.go:365] acquiring machines lock for addons-388000: {Name:mk831f2bd5a9a0c932c8a59972319d68edb53d07 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 14:36:23.654147    1522 start.go:369] acquired machines lock for "addons-388000" in 99.875µs
	I0914 14:36:23.654159    1522 start.go:93] Provisioning new machine with config: &{Name:addons-388000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:addons-388000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 14:36:23.654194    1522 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 14:36:23.662516    1522 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0914 14:36:23.982709    1522 start.go:159] libmachine.API.Create for "addons-388000" (driver="qemu2")
	I0914 14:36:23.982756    1522 client.go:168] LocalClient.Create starting
	I0914 14:36:23.982899    1522 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem
	I0914 14:36:24.329911    1522 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/cert.pem
	I0914 14:36:24.425142    1522 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17243-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1694625400-17243-arm64.iso...
	I0914 14:36:24.784281    1522 main.go:141] libmachine: Creating SSH key...
	I0914 14:36:25.013863    1522 main.go:141] libmachine: Creating Disk image...
	I0914 14:36:25.013874    1522 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 14:36:25.014143    1522 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/disk.qcow2.raw /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/disk.qcow2
	I0914 14:36:25.048599    1522 main.go:141] libmachine: STDOUT: 
	I0914 14:36:25.048634    1522 main.go:141] libmachine: STDERR: 
	I0914 14:36:25.048701    1522 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/disk.qcow2 +20000M
	I0914 14:36:25.056105    1522 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 14:36:25.056122    1522 main.go:141] libmachine: STDERR: 
	I0914 14:36:25.056141    1522 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/disk.qcow2
	I0914 14:36:25.056150    1522 main.go:141] libmachine: Starting QEMU VM...
	I0914 14:36:25.056194    1522 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:ab:b1:c2:6f:25 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/disk.qcow2
	I0914 14:36:25.122275    1522 main.go:141] libmachine: STDOUT: 
	I0914 14:36:25.122322    1522 main.go:141] libmachine: STDERR: 
	I0914 14:36:25.122327    1522 main.go:141] libmachine: Attempt 0
	I0914 14:36:25.122346    1522 main.go:141] libmachine: Searching for fa:ab:b1:c2:6f:25 in /var/db/dhcpd_leases ...
	I0914 14:36:27.123500    1522 main.go:141] libmachine: Attempt 1
	I0914 14:36:27.123581    1522 main.go:141] libmachine: Searching for fa:ab:b1:c2:6f:25 in /var/db/dhcpd_leases ...
	I0914 14:36:29.124764    1522 main.go:141] libmachine: Attempt 2
	I0914 14:36:29.124788    1522 main.go:141] libmachine: Searching for fa:ab:b1:c2:6f:25 in /var/db/dhcpd_leases ...
	I0914 14:36:31.125900    1522 main.go:141] libmachine: Attempt 3
	I0914 14:36:31.125919    1522 main.go:141] libmachine: Searching for fa:ab:b1:c2:6f:25 in /var/db/dhcpd_leases ...
	I0914 14:36:33.126934    1522 main.go:141] libmachine: Attempt 4
	I0914 14:36:33.126945    1522 main.go:141] libmachine: Searching for fa:ab:b1:c2:6f:25 in /var/db/dhcpd_leases ...
	I0914 14:36:35.127988    1522 main.go:141] libmachine: Attempt 5
	I0914 14:36:35.128006    1522 main.go:141] libmachine: Searching for fa:ab:b1:c2:6f:25 in /var/db/dhcpd_leases ...
	I0914 14:36:37.130061    1522 main.go:141] libmachine: Attempt 6
	I0914 14:36:37.130089    1522 main.go:141] libmachine: Searching for fa:ab:b1:c2:6f:25 in /var/db/dhcpd_leases ...
	I0914 14:36:37.130226    1522 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0914 14:36:37.130272    1522 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:fa:ab:b1:c2:6f:25 ID:1,fa:ab:b1:c2:6f:25 Lease:0x6504ce64}
	I0914 14:36:37.130284    1522 main.go:141] libmachine: Found match: fa:ab:b1:c2:6f:25
	I0914 14:36:37.130296    1522 main.go:141] libmachine: IP: 192.168.105.2
	I0914 14:36:37.130304    1522 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
	I0914 14:36:39.152264    1522 machine.go:88] provisioning docker machine ...
	I0914 14:36:39.152328    1522 buildroot.go:166] provisioning hostname "addons-388000"
	I0914 14:36:39.153898    1522 main.go:141] libmachine: Using SSH client type: native
	I0914 14:36:39.154765    1522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102894760] 0x102896ed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0914 14:36:39.154789    1522 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-388000 && echo "addons-388000" | sudo tee /etc/hostname
	I0914 14:36:39.254406    1522 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-388000
	
	I0914 14:36:39.254547    1522 main.go:141] libmachine: Using SSH client type: native
	I0914 14:36:39.254974    1522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102894760] 0x102896ed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0914 14:36:39.254987    1522 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-388000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-388000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-388000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 14:36:39.336783    1522 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 14:36:39.336807    1522 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17243-1006/.minikube CaCertPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17243-1006/.minikube}
	I0914 14:36:39.336834    1522 buildroot.go:174] setting up certificates
	I0914 14:36:39.336842    1522 provision.go:83] configureAuth start
	I0914 14:36:39.336850    1522 provision.go:138] copyHostCerts
	I0914 14:36:39.337062    1522 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17243-1006/.minikube/ca.pem (1082 bytes)
	I0914 14:36:39.337458    1522 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17243-1006/.minikube/cert.pem (1123 bytes)
	I0914 14:36:39.337624    1522 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17243-1006/.minikube/key.pem (1675 bytes)
	I0914 14:36:39.337823    1522 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca-key.pem org=jenkins.addons-388000 san=[192.168.105.2 192.168.105.2 localhost 127.0.0.1 minikube addons-388000]
	I0914 14:36:39.438902    1522 provision.go:172] copyRemoteCerts
	I0914 14:36:39.438967    1522 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 14:36:39.438977    1522 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/id_rsa Username:docker}
	I0914 14:36:39.475382    1522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0914 14:36:39.482935    1522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0914 14:36:39.490611    1522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0914 14:36:39.498058    1522 provision.go:86] duration metric: configureAuth took 161.21375ms
	I0914 14:36:39.498072    1522 buildroot.go:189] setting minikube options for container-runtime
	I0914 14:36:39.498194    1522 config.go:182] Loaded profile config "addons-388000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 14:36:39.498238    1522 main.go:141] libmachine: Using SSH client type: native
	I0914 14:36:39.498454    1522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102894760] 0x102896ed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0914 14:36:39.498461    1522 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0914 14:36:39.568371    1522 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0914 14:36:39.568380    1522 buildroot.go:70] root file system type: tmpfs
	I0914 14:36:39.568444    1522 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0914 14:36:39.568493    1522 main.go:141] libmachine: Using SSH client type: native
	I0914 14:36:39.568758    1522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102894760] 0x102896ed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0914 14:36:39.568795    1522 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0914 14:36:39.642658    1522 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0914 14:36:39.642714    1522 main.go:141] libmachine: Using SSH client type: native
	I0914 14:36:39.642984    1522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102894760] 0x102896ed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0914 14:36:39.642994    1522 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0914 14:36:40.018079    1522 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0914 14:36:40.018095    1522 machine.go:91] provisioned docker machine in 865.825208ms
	I0914 14:36:40.018101    1522 client.go:171] LocalClient.Create took 16.035747292s
	I0914 14:36:40.018112    1522 start.go:167] duration metric: libmachine.API.Create for "addons-388000" took 16.035815708s
	I0914 14:36:40.018117    1522 start.go:300] post-start starting for "addons-388000" (driver="qemu2")
	I0914 14:36:40.018121    1522 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 14:36:40.018186    1522 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 14:36:40.018197    1522 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/id_rsa Username:docker}
	I0914 14:36:40.056512    1522 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 14:36:40.057796    1522 info.go:137] Remote host: Buildroot 2021.02.12
	I0914 14:36:40.057807    1522 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17243-1006/.minikube/addons for local assets ...
	I0914 14:36:40.057875    1522 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17243-1006/.minikube/files for local assets ...
	I0914 14:36:40.057901    1522 start.go:303] post-start completed in 39.782666ms
	I0914 14:36:40.058218    1522 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/config.json ...
	I0914 14:36:40.058366    1522 start.go:128] duration metric: createHost completed in 16.404584042s
	I0914 14:36:40.058389    1522 main.go:141] libmachine: Using SSH client type: native
	I0914 14:36:40.058608    1522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102894760] 0x102896ed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0914 14:36:40.058612    1522 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0914 14:36:40.126242    1522 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694727400.596628044
	
	I0914 14:36:40.126252    1522 fix.go:206] guest clock: 1694727400.596628044
	I0914 14:36:40.126256    1522 fix.go:219] Guest: 2023-09-14 14:36:40.596628044 -0700 PDT Remote: 2023-09-14 14:36:40.058369 -0700 PDT m=+16.505601626 (delta=538.259044ms)
	I0914 14:36:40.126267    1522 fix.go:190] guest clock delta is within tolerance: 538.259044ms
	I0914 14:36:40.126272    1522 start.go:83] releasing machines lock for "addons-388000", held for 16.472537s
	I0914 14:36:40.126627    1522 ssh_runner.go:195] Run: cat /version.json
	I0914 14:36:40.126630    1522 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 14:36:40.126636    1522 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/id_rsa Username:docker}
	I0914 14:36:40.126680    1522 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/id_rsa Username:docker}
	I0914 14:36:40.164117    1522 ssh_runner.go:195] Run: systemctl --version
	I0914 14:36:40.279852    1522 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 14:36:40.282756    1522 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 14:36:40.282802    1522 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 14:36:40.290141    1522 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 14:36:40.290164    1522 start.go:469] detecting cgroup driver to use...
	I0914 14:36:40.290325    1522 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 14:36:40.298242    1522 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0914 14:36:40.302485    1522 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0914 14:36:40.306314    1522 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0914 14:36:40.306335    1522 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0914 14:36:40.309906    1522 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0914 14:36:40.313708    1522 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0914 14:36:40.317003    1522 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0914 14:36:40.319988    1522 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 14:36:40.323114    1522 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0914 14:36:40.326593    1522 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 14:36:40.329687    1522 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 14:36:40.332474    1522 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 14:36:40.414020    1522 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0914 14:36:40.421074    1522 start.go:469] detecting cgroup driver to use...
	I0914 14:36:40.421134    1522 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0914 14:36:40.426647    1522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 14:36:40.431508    1522 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 14:36:40.437031    1522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 14:36:40.441206    1522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0914 14:36:40.445778    1522 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0914 14:36:40.494559    1522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0914 14:36:40.500245    1522 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 14:36:40.506085    1522 ssh_runner.go:195] Run: which cri-dockerd
	I0914 14:36:40.507323    1522 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0914 14:36:40.510306    1522 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0914 14:36:40.515235    1522 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0914 14:36:40.590641    1522 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0914 14:36:40.670685    1522 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0914 14:36:40.670697    1522 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0914 14:36:40.676022    1522 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 14:36:40.753642    1522 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0914 14:36:41.915654    1522 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.162025209s)
	I0914 14:36:41.915719    1522 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0914 14:36:41.996165    1522 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0914 14:36:42.077673    1522 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0914 14:36:42.158787    1522 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 14:36:42.238393    1522 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0914 14:36:42.246223    1522 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 14:36:42.322653    1522 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0914 14:36:42.347035    1522 start.go:516] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0914 14:36:42.347147    1522 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0914 14:36:42.349276    1522 start.go:537] Will wait 60s for crictl version
	I0914 14:36:42.349310    1522 ssh_runner.go:195] Run: which crictl
	I0914 14:36:42.350645    1522 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 14:36:42.367912    1522 start.go:553] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1alpha2
	I0914 14:36:42.367994    1522 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0914 14:36:42.377957    1522 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0914 14:36:42.394599    1522 out.go:204] * Preparing Kubernetes v1.28.1 on Docker 24.0.6 ...
	I0914 14:36:42.394744    1522 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0914 14:36:42.396150    1522 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 14:36:42.399678    1522 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0914 14:36:42.399720    1522 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0914 14:36:42.404754    1522 docker.go:636] Got preloaded images: 
	I0914 14:36:42.404761    1522 docker.go:642] registry.k8s.io/kube-apiserver:v1.28.1 wasn't preloaded
	I0914 14:36:42.404801    1522 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0914 14:36:42.407644    1522 ssh_runner.go:195] Run: which lz4
	I0914 14:36:42.408926    1522 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0914 14:36:42.410207    1522 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 14:36:42.410221    1522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (356902558 bytes)
	I0914 14:36:43.758723    1522 docker.go:600] Took 1.349866 seconds to copy over tarball
	I0914 14:36:43.758788    1522 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 14:36:44.802481    1522 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.043706042s)
	I0914 14:36:44.802494    1522 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 14:36:44.818862    1522 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0914 14:36:44.822486    1522 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0914 14:36:44.827997    1522 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 14:36:44.904406    1522 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0914 14:36:47.070320    1522 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.165952375s)
	I0914 14:36:47.070426    1522 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0914 14:36:47.076673    1522 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.1
	registry.k8s.io/kube-scheduler:v1.28.1
	registry.k8s.io/kube-proxy:v1.28.1
	registry.k8s.io/kube-controller-manager:v1.28.1
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0914 14:36:47.076684    1522 cache_images.go:84] Images are preloaded, skipping loading
	I0914 14:36:47.076750    1522 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0914 14:36:47.084410    1522 cni.go:84] Creating CNI manager for ""
	I0914 14:36:47.084420    1522 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 14:36:47.084443    1522 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0914 14:36:47.084452    1522 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-388000 NodeName:addons-388000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 14:36:47.084527    1522 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-388000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 14:36:47.084571    1522 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-388000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:addons-388000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0914 14:36:47.084633    1522 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0914 14:36:47.087471    1522 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 14:36:47.087501    1522 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 14:36:47.090481    1522 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0914 14:36:47.095702    1522 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 14:36:47.100584    1522 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I0914 14:36:47.105532    1522 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0914 14:36:47.106963    1522 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 14:36:47.110892    1522 certs.go:56] Setting up /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000 for IP: 192.168.105.2
	I0914 14:36:47.110903    1522 certs.go:190] acquiring lock for shared ca certs: {Name:mkd19d6e2143685b57ba1e0d43c4081bbdb26a79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 14:36:47.111053    1522 certs.go:204] generating minikubeCA CA: /Users/jenkins/minikube-integration/17243-1006/.minikube/ca.key
	I0914 14:36:47.228830    1522 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17243-1006/.minikube/ca.crt ...
	I0914 14:36:47.228840    1522 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/ca.crt: {Name:mk1c10f9290e336c983838c8c09bb8cd18a9a4c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 14:36:47.229095    1522 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17243-1006/.minikube/ca.key ...
	I0914 14:36:47.229099    1522 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/ca.key: {Name:mkbc669c78b9b93a07aa566669e7e92430fec9e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 14:36:47.229219    1522 certs.go:204] generating proxyClientCA CA: /Users/jenkins/minikube-integration/17243-1006/.minikube/proxy-client-ca.key
	I0914 14:36:47.333428    1522 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17243-1006/.minikube/proxy-client-ca.crt ...
	I0914 14:36:47.333432    1522 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/proxy-client-ca.crt: {Name:mk85d65dc023d08a0f4cb19cc395e69f12c9ed1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 14:36:47.333577    1522 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17243-1006/.minikube/proxy-client-ca.key ...
	I0914 14:36:47.333579    1522 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/proxy-client-ca.key: {Name:mk62bc08bafeee956e88b9480bac37c2df91bf30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 14:36:47.333721    1522 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/client.key
	I0914 14:36:47.333730    1522 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/client.crt with IP's: []
	I0914 14:36:47.598337    1522 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/client.crt ...
	I0914 14:36:47.598352    1522 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/client.crt: {Name:mk8ecd4e838807718c7ef97bafd599d3b7fd1a4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 14:36:47.598702    1522 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/client.key ...
	I0914 14:36:47.598710    1522 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/client.key: {Name:mk3960bc5fb536243466f07f9f23680cfa92d826 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 14:36:47.598826    1522 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/apiserver.key.96055969
	I0914 14:36:47.598838    1522 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/apiserver.crt.96055969 with IP's: [192.168.105.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0914 14:36:47.656638    1522 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/apiserver.crt.96055969 ...
	I0914 14:36:47.656642    1522 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/apiserver.crt.96055969: {Name:mk3691ba24392ca70b8d7adb6c837bd5b52dfeeb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 14:36:47.656789    1522 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/apiserver.key.96055969 ...
	I0914 14:36:47.656792    1522 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/apiserver.key.96055969: {Name:mk7619af569a08784491e3a0055c754ead430eca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 14:36:47.656913    1522 certs.go:337] copying /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/apiserver.crt.96055969 -> /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/apiserver.crt
	I0914 14:36:47.657047    1522 certs.go:341] copying /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/apiserver.key.96055969 -> /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/apiserver.key
	I0914 14:36:47.657134    1522 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/proxy-client.key
	I0914 14:36:47.657146    1522 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/proxy-client.crt with IP's: []
	I0914 14:36:47.715161    1522 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/proxy-client.crt ...
	I0914 14:36:47.715165    1522 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/proxy-client.crt: {Name:mk5c5221c842b768f8e9ba880dc08acd610bf8dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 14:36:47.715298    1522 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/proxy-client.key ...
	I0914 14:36:47.715301    1522 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/proxy-client.key: {Name:mk620ca3f197a51ffd017e6711b4bab26fb15d79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 14:36:47.715560    1522 certs.go:437] found cert: /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 14:36:47.715594    1522 certs.go:437] found cert: /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem (1082 bytes)
	I0914 14:36:47.715621    1522 certs.go:437] found cert: /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/cert.pem (1123 bytes)
	I0914 14:36:47.715645    1522 certs.go:437] found cert: /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/key.pem (1675 bytes)
	I0914 14:36:47.716027    1522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0914 14:36:47.723894    1522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 14:36:47.731037    1522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 14:36:47.738379    1522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 14:36:47.745927    1522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 14:36:47.752925    1522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 14:36:47.759542    1522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 14:36:47.766602    1522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0914 14:36:47.773763    1522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 14:36:47.780697    1522 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 14:36:47.786484    1522 ssh_runner.go:195] Run: openssl version
	I0914 14:36:47.788649    1522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 14:36:47.791615    1522 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 14:36:47.793075    1522 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 14 21:36 /usr/share/ca-certificates/minikubeCA.pem
	I0914 14:36:47.793092    1522 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 14:36:47.794978    1522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 14:36:47.798423    1522 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0914 14:36:47.799931    1522 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0914 14:36:47.799971    1522 kubeadm.go:404] StartCluster: {Name:addons-388000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:addons-388000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 14:36:47.800034    1522 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0914 14:36:47.805504    1522 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 14:36:47.808480    1522 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 14:36:47.811111    1522 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 14:36:47.814398    1522 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 14:36:47.814412    1522 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 14:36:47.835210    1522 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0914 14:36:47.835254    1522 kubeadm.go:322] [preflight] Running pre-flight checks
	I0914 14:36:47.889698    1522 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 14:36:47.889750    1522 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 14:36:47.889794    1522 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0914 14:36:47.952261    1522 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 14:36:47.962464    1522 out.go:204]   - Generating certificates and keys ...
	I0914 14:36:47.962497    1522 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0914 14:36:47.962525    1522 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0914 14:36:48.025951    1522 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0914 14:36:48.134925    1522 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0914 14:36:48.186988    1522 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0914 14:36:48.299178    1522 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0914 14:36:48.429498    1522 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0914 14:36:48.429557    1522 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-388000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0914 14:36:48.510620    1522 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0914 14:36:48.510686    1522 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-388000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0914 14:36:48.631510    1522 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0914 14:36:48.668002    1522 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0914 14:36:48.726941    1522 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0914 14:36:48.726969    1522 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 14:36:48.823035    1522 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 14:36:48.918005    1522 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 14:36:49.052610    1522 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 14:36:49.136045    1522 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 14:36:49.136292    1522 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 14:36:49.138218    1522 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 14:36:49.141449    1522 out.go:204]   - Booting up control plane ...
	I0914 14:36:49.141518    1522 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 14:36:49.141563    1522 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 14:36:49.141596    1522 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 14:36:49.146098    1522 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 14:36:49.146527    1522 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 14:36:49.146584    1522 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0914 14:36:49.235726    1522 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 14:36:53.234480    1522 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.002199 seconds
	I0914 14:36:53.234548    1522 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 14:36:53.240692    1522 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 14:36:53.748795    1522 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0914 14:36:53.748894    1522 kubeadm.go:322] [mark-control-plane] Marking the node addons-388000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0914 14:36:54.253997    1522 kubeadm.go:322] [bootstrap-token] Using token: v43sey.bixdamecwwaf1quf
	I0914 14:36:54.261418    1522 out.go:204]   - Configuring RBAC rules ...
	I0914 14:36:54.261475    1522 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 14:36:54.262616    1522 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0914 14:36:54.269041    1522 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 14:36:54.270041    1522 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0914 14:36:54.271028    1522 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 14:36:54.272209    1522 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 14:36:54.276273    1522 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0914 14:36:54.432396    1522 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0914 14:36:54.665469    1522 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0914 14:36:54.665894    1522 kubeadm.go:322] 
	I0914 14:36:54.665937    1522 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0914 14:36:54.665940    1522 kubeadm.go:322] 
	I0914 14:36:54.665992    1522 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0914 14:36:54.665996    1522 kubeadm.go:322] 
	I0914 14:36:54.666008    1522 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0914 14:36:54.666036    1522 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 14:36:54.666071    1522 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 14:36:54.666074    1522 kubeadm.go:322] 
	I0914 14:36:54.666099    1522 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0914 14:36:54.666101    1522 kubeadm.go:322] 
	I0914 14:36:54.666123    1522 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0914 14:36:54.666126    1522 kubeadm.go:322] 
	I0914 14:36:54.666148    1522 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0914 14:36:54.666182    1522 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 14:36:54.666217    1522 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 14:36:54.666220    1522 kubeadm.go:322] 
	I0914 14:36:54.666261    1522 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0914 14:36:54.666306    1522 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0914 14:36:54.666308    1522 kubeadm.go:322] 
	I0914 14:36:54.666396    1522 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token v43sey.bixdamecwwaf1quf \
	I0914 14:36:54.666457    1522 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:64a56fc8b6798e154cd69d815313de8f6cddb906dc6093ac20ba940ac7c1d871 \
	I0914 14:36:54.666472    1522 kubeadm.go:322] 	--control-plane 
	I0914 14:36:54.666475    1522 kubeadm.go:322] 
	I0914 14:36:54.666513    1522 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0914 14:36:54.666517    1522 kubeadm.go:322] 
	I0914 14:36:54.666553    1522 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token v43sey.bixdamecwwaf1quf \
	I0914 14:36:54.666621    1522 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:64a56fc8b6798e154cd69d815313de8f6cddb906dc6093ac20ba940ac7c1d871 
	I0914 14:36:54.666672    1522 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 14:36:54.666677    1522 cni.go:84] Creating CNI manager for ""
	I0914 14:36:54.666685    1522 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 14:36:54.674398    1522 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 14:36:54.677531    1522 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 14:36:54.681843    1522 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0914 14:36:54.686762    1522 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 14:36:54.686820    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:36:54.686837    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=d7492f2ae2d9b6e62b385ffcd97ebad62c645e82 minikube.k8s.io/name=addons-388000 minikube.k8s.io/updated_at=2023_09_14T14_36_54_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:36:54.745761    1522 ops.go:34] apiserver oom_adj: -16
	I0914 14:36:54.751811    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:36:54.783862    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:36:55.319135    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:36:55.819146    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:36:56.319044    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:36:56.817396    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:36:57.317676    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:36:57.819036    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:36:58.319007    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:36:58.819025    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:36:59.318963    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:36:59.819032    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:00.318959    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:00.819007    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:01.318925    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:01.819004    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:02.318900    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:02.818938    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:03.318896    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:03.818843    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:04.318914    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:04.818824    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:05.318789    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:05.818890    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:06.318784    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:06.818791    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:07.318787    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:07.357143    1522 kubeadm.go:1081] duration metric: took 12.670689708s to wait for elevateKubeSystemPrivileges.
	I0914 14:37:07.357158    1522 kubeadm.go:406] StartCluster complete in 19.557685291s
	I0914 14:37:07.357184    1522 settings.go:142] acquiring lock: {Name:mkcccc97e247e7e1b2e556ccc64336c05a92af53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 14:37:07.357360    1522 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17243-1006/kubeconfig
	I0914 14:37:07.357606    1522 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/kubeconfig: {Name:mkeec13fc5a79792669e9cedabfbe21efeb27d3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 14:37:07.357803    1522 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0914 14:37:07.357856    1522 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0914 14:37:07.357902    1522 addons.go:69] Setting volumesnapshots=true in profile "addons-388000"
	I0914 14:37:07.357909    1522 addons.go:231] Setting addon volumesnapshots=true in "addons-388000"
	I0914 14:37:07.357912    1522 addons.go:69] Setting ingress=true in profile "addons-388000"
	I0914 14:37:07.357919    1522 addons.go:231] Setting addon ingress=true in "addons-388000"
	I0914 14:37:07.357926    1522 host.go:66] Checking if "addons-388000" exists ...
	I0914 14:37:07.357934    1522 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-388000"
	I0914 14:37:07.357942    1522 host.go:66] Checking if "addons-388000" exists ...
	I0914 14:37:07.357951    1522 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-388000"
	I0914 14:37:07.357967    1522 host.go:66] Checking if "addons-388000" exists ...
	I0914 14:37:07.357975    1522 addons.go:69] Setting ingress-dns=true in profile "addons-388000"
	I0914 14:37:07.357985    1522 addons.go:69] Setting metrics-server=true in profile "addons-388000"
	I0914 14:37:07.358004    1522 addons.go:231] Setting addon ingress-dns=true in "addons-388000"
	I0914 14:37:07.358008    1522 addons.go:231] Setting addon metrics-server=true in "addons-388000"
	I0914 14:37:07.358046    1522 host.go:66] Checking if "addons-388000" exists ...
	I0914 14:37:07.358051    1522 host.go:66] Checking if "addons-388000" exists ...
	I0914 14:37:07.358066    1522 addons.go:69] Setting inspektor-gadget=true in profile "addons-388000"
	I0914 14:37:07.358074    1522 addons.go:231] Setting addon inspektor-gadget=true in "addons-388000"
	I0914 14:37:07.358086    1522 host.go:66] Checking if "addons-388000" exists ...
	I0914 14:37:07.358133    1522 config.go:182] Loaded profile config "addons-388000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 14:37:07.358210    1522 addons.go:69] Setting registry=true in profile "addons-388000"
	I0914 14:37:07.358222    1522 addons.go:231] Setting addon registry=true in "addons-388000"
	I0914 14:37:07.358259    1522 addons.go:69] Setting cloud-spanner=true in profile "addons-388000"
	I0914 14:37:07.358263    1522 addons.go:69] Setting default-storageclass=true in profile "addons-388000"
	I0914 14:37:07.358265    1522 addons.go:231] Setting addon cloud-spanner=true in "addons-388000"
	I0914 14:37:07.358266    1522 host.go:66] Checking if "addons-388000" exists ...
	I0914 14:37:07.358273    1522 addons.go:69] Setting storage-provisioner=true in profile "addons-388000"
	I0914 14:37:07.358276    1522 host.go:66] Checking if "addons-388000" exists ...
	I0914 14:37:07.358278    1522 addons.go:231] Setting addon storage-provisioner=true in "addons-388000"
	I0914 14:37:07.358289    1522 host.go:66] Checking if "addons-388000" exists ...
	W0914 14:37:07.358332    1522 host.go:54] host status for "addons-388000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/monitor: connect: connection refused
	W0914 14:37:07.358339    1522 addons.go:277] "addons-388000" is not running, setting inspektor-gadget=true and skipping enablement (err=<nil>)
	W0914 14:37:07.358450    1522 host.go:54] host status for "addons-388000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/monitor: connect: connection refused
	W0914 14:37:07.358453    1522 addons.go:277] "addons-388000" is not running, setting volumesnapshots=true and skipping enablement (err=<nil>)
	W0914 14:37:07.358483    1522 host.go:54] host status for "addons-388000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/monitor: connect: connection refused
	W0914 14:37:07.358489    1522 addons.go:277] "addons-388000" is not running, setting cloud-spanner=true and skipping enablement (err=<nil>)
	W0914 14:37:07.358257    1522 host.go:54] host status for "addons-388000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/monitor: connect: connection refused
	W0914 14:37:07.358494    1522 addons.go:277] "addons-388000" is not running, setting ingress=true and skipping enablement (err=<nil>)
	I0914 14:37:07.358496    1522 addons.go:467] Verifying addon ingress=true in "addons-388000"
	W0914 14:37:07.358500    1522 host.go:54] host status for "addons-388000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/monitor: connect: connection refused
	W0914 14:37:07.358504    1522 addons.go:277] "addons-388000" is not running, setting storage-provisioner=true and skipping enablement (err=<nil>)
	I0914 14:37:07.363429    1522 out.go:177] * Verifying ingress addon...
	I0914 14:37:07.358269    1522 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-388000"
	W0914 14:37:07.358528    1522 host.go:54] host status for "addons-388000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/monitor: connect: connection refused
	I0914 14:37:07.358271    1522 addons.go:69] Setting gcp-auth=true in profile "addons-388000"
	W0914 14:37:07.358722    1522 host.go:54] host status for "addons-388000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/monitor: connect: connection refused
	W0914 14:37:07.370487    1522 addons.go:277] "addons-388000" is not running, setting ingress-dns=true and skipping enablement (err=<nil>)
	I0914 14:37:07.370528    1522 mustload.go:65] Loading cluster: addons-388000
	W0914 14:37:07.370533    1522 addons.go:277] "addons-388000" is not running, setting registry=true and skipping enablement (err=<nil>)
	I0914 14:37:07.370877    1522 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0914 14:37:07.371899    1522 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-388000" context rescaled to 1 replicas
	I0914 14:37:07.372685    1522 addons.go:231] Setting addon default-storageclass=true in "addons-388000"
	I0914 14:37:07.374445    1522 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0914 14:37:07.377503    1522 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I0914 14:37:07.377530    1522 addons.go:467] Verifying addon registry=true in "addons-388000"
	I0914 14:37:07.377544    1522 host.go:66] Checking if "addons-388000" exists ...
	I0914 14:37:07.377592    1522 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 14:37:07.377611    1522 config.go:182] Loaded profile config "addons-388000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 14:37:07.379668    1522 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0914 14:37:07.387418    1522 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0914 14:37:07.384502    1522 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 14:37:07.385215    1522 host.go:66] Checking if "addons-388000" exists ...
	I0914 14:37:07.385566    1522 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 14:37:07.399476    1522 out.go:177] * Verifying Kubernetes components...
	I0914 14:37:07.399484    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 14:37:07.405519    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 14:37:07.405539    1522 out.go:177] * Verifying registry addon...
	I0914 14:37:07.409300    1522 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0914 14:37:07.413413    1522 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0914 14:37:07.409310    1522 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/id_rsa Username:docker}
	I0914 14:37:07.409318    1522 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/id_rsa Username:docker}
	I0914 14:37:07.413772    1522 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0914 14:37:07.421473    1522 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0914 14:37:07.425266    1522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 14:37:07.434436    1522 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0914 14:37:07.437375    1522 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=registry
	I0914 14:37:07.438456    1522 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0914 14:37:07.450436    1522 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0914 14:37:07.460462    1522 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0914 14:37:07.463476    1522 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0914 14:37:07.463485    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0914 14:37:07.463494    1522 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/id_rsa Username:docker}
	I0914 14:37:07.497507    1522 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 14:37:07.497516    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0914 14:37:07.503780    1522 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 14:37:07.503787    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 14:37:07.509075    1522 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 14:37:07.509081    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 14:37:07.516870    1522 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 14:37:07.522898    1522 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 14:37:07.539508    1522 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0914 14:37:07.539521    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0914 14:37:07.591865    1522 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0914 14:37:07.591879    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0914 14:37:07.635732    1522 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0914 14:37:07.635742    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0914 14:37:07.644322    1522 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0914 14:37:07.644333    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0914 14:37:07.649557    1522 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0914 14:37:07.649568    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0914 14:37:07.681313    1522 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0914 14:37:07.681325    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0914 14:37:07.685931    1522 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0914 14:37:07.685936    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0914 14:37:07.690914    1522 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0914 14:37:07.690921    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0914 14:37:07.695920    1522 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0914 14:37:07.695926    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0914 14:37:07.700851    1522 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0914 14:37:07.700856    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0914 14:37:07.705677    1522 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0914 14:37:08.213892    1522 start.go:917] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I0914 14:37:08.214323    1522 node_ready.go:35] waiting up to 6m0s for node "addons-388000" to be "Ready" ...
	I0914 14:37:08.215929    1522 node_ready.go:49] node "addons-388000" has status "Ready":"True"
	I0914 14:37:08.215948    1522 node_ready.go:38] duration metric: took 1.599458ms waiting for node "addons-388000" to be "Ready" ...
	I0914 14:37:08.215953    1522 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 14:37:08.218780    1522 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-6php8" in "kube-system" namespace to be "Ready" ...
	I0914 14:37:08.378405    1522 addons.go:467] Verifying addon metrics-server=true in "addons-388000"
	I0914 14:37:08.878056    1522 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.172383083s)
	I0914 14:37:08.878074    1522 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-388000"
	I0914 14:37:08.882346    1522 out.go:177] * Verifying csi-hostpath-driver addon...
	I0914 14:37:08.892719    1522 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0914 14:37:08.895508    1522 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0914 14:37:08.895515    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:08.901644    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:09.404389    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:09.734233    1522 pod_ready.go:97] error getting pod "coredns-5dd5756b68-6php8" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-6php8" not found
	I0914 14:37:09.734244    1522 pod_ready.go:81] duration metric: took 1.515495542s waiting for pod "coredns-5dd5756b68-6php8" in "kube-system" namespace to be "Ready" ...
	E0914 14:37:09.734250    1522 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-6php8" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-6php8" not found
	I0914 14:37:09.734253    1522 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-psn28" in "kube-system" namespace to be "Ready" ...
	I0914 14:37:09.736576    1522 pod_ready.go:92] pod "coredns-5dd5756b68-psn28" in "kube-system" namespace has status "Ready":"True"
	I0914 14:37:09.736583    1522 pod_ready.go:81] duration metric: took 2.327542ms waiting for pod "coredns-5dd5756b68-psn28" in "kube-system" namespace to be "Ready" ...
	I0914 14:37:09.736588    1522 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-388000" in "kube-system" namespace to be "Ready" ...
	I0914 14:37:09.739033    1522 pod_ready.go:92] pod "etcd-addons-388000" in "kube-system" namespace has status "Ready":"True"
	I0914 14:37:09.739038    1522 pod_ready.go:81] duration metric: took 2.447792ms waiting for pod "etcd-addons-388000" in "kube-system" namespace to be "Ready" ...
	I0914 14:37:09.739041    1522 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-388000" in "kube-system" namespace to be "Ready" ...
	I0914 14:37:09.741595    1522 pod_ready.go:92] pod "kube-apiserver-addons-388000" in "kube-system" namespace has status "Ready":"True"
	I0914 14:37:09.741601    1522 pod_ready.go:81] duration metric: took 2.556083ms waiting for pod "kube-apiserver-addons-388000" in "kube-system" namespace to be "Ready" ...
	I0914 14:37:09.741605    1522 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-388000" in "kube-system" namespace to be "Ready" ...
	I0914 14:37:09.904411    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:10.016583    1522 pod_ready.go:92] pod "kube-controller-manager-addons-388000" in "kube-system" namespace has status "Ready":"True"
	I0914 14:37:10.016591    1522 pod_ready.go:81] duration metric: took 274.98975ms waiting for pod "kube-controller-manager-addons-388000" in "kube-system" namespace to be "Ready" ...
	I0914 14:37:10.016595    1522 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8pbsf" in "kube-system" namespace to be "Ready" ...
	I0914 14:37:10.404994    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:10.417030    1522 pod_ready.go:92] pod "kube-proxy-8pbsf" in "kube-system" namespace has status "Ready":"True"
	I0914 14:37:10.417036    1522 pod_ready.go:81] duration metric: took 400.447833ms waiting for pod "kube-proxy-8pbsf" in "kube-system" namespace to be "Ready" ...
	I0914 14:37:10.417041    1522 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-388000" in "kube-system" namespace to be "Ready" ...
	I0914 14:37:10.816814    1522 pod_ready.go:92] pod "kube-scheduler-addons-388000" in "kube-system" namespace has status "Ready":"True"
	I0914 14:37:10.816823    1522 pod_ready.go:81] duration metric: took 399.789417ms waiting for pod "kube-scheduler-addons-388000" in "kube-system" namespace to be "Ready" ...
	I0914 14:37:10.816827    1522 pod_ready.go:38] duration metric: took 2.600935083s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 14:37:10.816835    1522 api_server.go:52] waiting for apiserver process to appear ...
	I0914 14:37:10.816886    1522 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 14:37:10.821727    1522 api_server.go:72] duration metric: took 3.437324417s to wait for apiserver process to appear ...
	I0914 14:37:10.821733    1522 api_server.go:88] waiting for apiserver healthz status ...
	I0914 14:37:10.821738    1522 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I0914 14:37:10.825342    1522 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
	I0914 14:37:10.826107    1522 api_server.go:141] control plane version: v1.28.1
	I0914 14:37:10.826114    1522 api_server.go:131] duration metric: took 4.378333ms to wait for apiserver health ...
	I0914 14:37:10.826117    1522 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 14:37:10.904363    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:11.018876    1522 system_pods.go:59] 10 kube-system pods found
	I0914 14:37:11.018886    1522 system_pods.go:61] "coredns-5dd5756b68-psn28" [50c0e128-9a93-456c-83af-dfbcda64eaa4] Running
	I0914 14:37:11.018891    1522 system_pods.go:61] "csi-hostpath-attacher-0" [29be2dba-12b9-4442-8c83-8d24fd054a90] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0914 14:37:11.018894    1522 system_pods.go:61] "csi-hostpath-resizer-0" [11fcc7a2-d176-442f-9cd6-04668da8d423] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0914 14:37:11.018898    1522 system_pods.go:61] "csi-hostpathplugin-b5k2m" [aa03259b-6f1a-4537-95f6-47e8cf8fcc39] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0914 14:37:11.018909    1522 system_pods.go:61] "etcd-addons-388000" [b91a9e99-fb10-4340-977e-536225df8415] Running
	I0914 14:37:11.018914    1522 system_pods.go:61] "kube-apiserver-addons-388000" [43fed39d-32f3-4b45-b43c-d9918758a66c] Running
	I0914 14:37:11.018917    1522 system_pods.go:61] "kube-controller-manager-addons-388000" [31eb0c68-03ca-4907-921b-14ccef970edf] Running
	I0914 14:37:11.018920    1522 system_pods.go:61] "kube-proxy-8pbsf" [e9d3ab50-7594-4360-8226-d37e954aca6e] Running
	I0914 14:37:11.018923    1522 system_pods.go:61] "kube-scheduler-addons-388000" [d931a34d-1c14-4544-80cd-ce847a1f1af8] Running
	I0914 14:37:11.018927    1522 system_pods.go:61] "metrics-server-7c66d45ddc-dtj78" [7b539063-f45b-4a15-97e7-6713ea57e519] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 14:37:11.018932    1522 system_pods.go:74] duration metric: took 192.817125ms to wait for pod list to return data ...
	I0914 14:37:11.018935    1522 default_sa.go:34] waiting for default service account to be created ...
	I0914 14:37:11.216117    1522 default_sa.go:45] found service account: "default"
	I0914 14:37:11.216127    1522 default_sa.go:55] duration metric: took 197.1925ms for default service account to be created ...
	I0914 14:37:11.216130    1522 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 14:37:11.404125    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:11.419144    1522 system_pods.go:86] 10 kube-system pods found
	I0914 14:37:11.419151    1522 system_pods.go:89] "coredns-5dd5756b68-psn28" [50c0e128-9a93-456c-83af-dfbcda64eaa4] Running
	I0914 14:37:11.419155    1522 system_pods.go:89] "csi-hostpath-attacher-0" [29be2dba-12b9-4442-8c83-8d24fd054a90] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0914 14:37:11.419158    1522 system_pods.go:89] "csi-hostpath-resizer-0" [11fcc7a2-d176-442f-9cd6-04668da8d423] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0914 14:37:11.419163    1522 system_pods.go:89] "csi-hostpathplugin-b5k2m" [aa03259b-6f1a-4537-95f6-47e8cf8fcc39] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0914 14:37:11.419167    1522 system_pods.go:89] "etcd-addons-388000" [b91a9e99-fb10-4340-977e-536225df8415] Running
	I0914 14:37:11.419169    1522 system_pods.go:89] "kube-apiserver-addons-388000" [43fed39d-32f3-4b45-b43c-d9918758a66c] Running
	I0914 14:37:11.419176    1522 system_pods.go:89] "kube-controller-manager-addons-388000" [31eb0c68-03ca-4907-921b-14ccef970edf] Running
	I0914 14:37:11.419178    1522 system_pods.go:89] "kube-proxy-8pbsf" [e9d3ab50-7594-4360-8226-d37e954aca6e] Running
	I0914 14:37:11.419180    1522 system_pods.go:89] "kube-scheduler-addons-388000" [d931a34d-1c14-4544-80cd-ce847a1f1af8] Running
	I0914 14:37:11.419183    1522 system_pods.go:89] "metrics-server-7c66d45ddc-dtj78" [7b539063-f45b-4a15-97e7-6713ea57e519] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 14:37:11.419189    1522 system_pods.go:126] duration metric: took 203.059ms to wait for k8s-apps to be running ...
	I0914 14:37:11.419193    1522 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 14:37:11.419242    1522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 14:37:11.424702    1522 system_svc.go:56] duration metric: took 5.506625ms WaitForService to wait for kubelet.
	I0914 14:37:11.424708    1522 kubeadm.go:581] duration metric: took 4.040322208s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0914 14:37:11.424718    1522 node_conditions.go:102] verifying NodePressure condition ...
	I0914 14:37:11.616510    1522 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0914 14:37:11.616524    1522 node_conditions.go:123] node cpu capacity is 2
	I0914 14:37:11.616531    1522 node_conditions.go:105] duration metric: took 191.81375ms to run NodePressure ...
	I0914 14:37:11.616536    1522 start.go:228] waiting for startup goroutines ...
	I0914 14:37:11.904062    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:12.404356    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:12.904283    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:13.404719    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:13.905195    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:14.010940    1522 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0914 14:37:14.010958    1522 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/id_rsa Username:docker}
	I0914 14:37:14.050416    1522 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0914 14:37:14.056158    1522 addons.go:231] Setting addon gcp-auth=true in "addons-388000"
	I0914 14:37:14.056180    1522 host.go:66] Checking if "addons-388000" exists ...
	I0914 14:37:14.056914    1522 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0914 14:37:14.056921    1522 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/id_rsa Username:docker}
	I0914 14:37:14.098984    1522 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0914 14:37:14.102963    1522 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0914 14:37:14.106843    1522 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0914 14:37:14.106851    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0914 14:37:14.112250    1522 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0914 14:37:14.112259    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0914 14:37:14.117057    1522 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0914 14:37:14.117063    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0914 14:37:14.122524    1522 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0914 14:37:14.407542    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:14.453711    1522 addons.go:467] Verifying addon gcp-auth=true in "addons-388000"
	I0914 14:37:14.458827    1522 out.go:177] * Verifying gcp-auth addon...
	I0914 14:37:14.469206    1522 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0914 14:37:14.473873    1522 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0914 14:37:14.473883    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:14.477552    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:14.905449    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:14.981028    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:15.404241    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:15.481017    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:15.904406    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:15.981050    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:16.404161    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:16.481356    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:16.904348    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:16.980852    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:17.404432    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:17.480937    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:17.904061    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:17.980969    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:18.404491    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:18.481031    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:18.904020    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:18.981054    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:19.405323    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:19.480019    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:19.904276    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:19.980839    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:20.404204    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:20.481250    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:20.904037    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:20.981407    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:21.404239    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:21.481248    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:21.904261    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:21.981109    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:22.405094    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:22.481049    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:22.904407    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:22.981227    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:23.404066    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:23.480779    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:23.904000    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:23.980955    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:24.404182    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:24.480903    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:24.904034    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:24.980896    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:25.403993    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:25.480949    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:25.903717    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:25.981591    1522 kapi.go:107] duration metric: took 11.512675166s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0914 14:37:25.985811    1522 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-388000 cluster.
	I0914 14:37:25.990747    1522 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0914 14:37:25.993661    1522 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0914 14:37:26.404089    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:26.904132    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:27.405664    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:27.903941    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:28.403884    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:28.903901    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:29.404487    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:29.903852    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:30.404685    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:30.903890    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:31.403753    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:31.903926    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:32.404318    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:32.903835    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:33.403834    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:33.903687    1522 kapi.go:107] duration metric: took 25.011601375s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0914 14:43:07.370409    1522 kapi.go:107] duration metric: took 6m0.008648916s to wait for app.kubernetes.io/name=ingress-nginx ...
	W0914 14:43:07.370479    1522 out.go:239] ! Enabling 'ingress' returned an error: running callbacks: [waiting for app.kubernetes.io/name=ingress-nginx pods: context deadline exceeded]
	I0914 14:43:07.418192    1522 kapi.go:107] duration metric: took 6m0.013534334s to wait for kubernetes.io/minikube-addons=registry ...
	W0914 14:43:07.418227    1522 out.go:239] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
	I0914 14:43:07.425587    1522 out.go:177] * Enabled addons: inspektor-gadget, volumesnapshots, cloud-spanner, storage-provisioner, ingress-dns, default-storageclass, metrics-server, gcp-auth, csi-hostpath-driver
	I0914 14:43:07.433636    1522 addons.go:502] enable addons completed in 6m0.084906709s: enabled=[inspektor-gadget volumesnapshots cloud-spanner storage-provisioner ingress-dns default-storageclass metrics-server gcp-auth csi-hostpath-driver]
	I0914 14:43:07.433650    1522 start.go:233] waiting for cluster config update ...
	I0914 14:43:07.433664    1522 start.go:242] writing updated cluster config ...
	I0914 14:43:07.433996    1522 ssh_runner.go:195] Run: rm -f paused
	I0914 14:43:07.464084    1522 start.go:600] kubectl: 1.27.2, cluster: 1.28.1 (minor skew: 1)
	I0914 14:43:07.467672    1522 out.go:177] * Done! kubectl is now configured to use "addons-388000" cluster and "default" namespace by default
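Two addons (ingress and registry) hit their 6m0s pod-wait deadline during this start, as flagged in the warnings above, while the rest enabled cleanly. Assuming the addons-388000 profile is still running, the natural retry is to re-enable them individually with the same binary used by the test, for example:

    out/minikube-darwin-arm64 -p addons-388000 addons enable ingress
    out/minikube-darwin-arm64 -p addons-388000 addons enable registry

These are standard minikube invocations; whether they succeed depends on why the ingress-nginx and registry pods never became ready in the first place, which the per-addon logs below do not show directly.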
	
	* 
	* ==> Docker <==
	* -- Journal begins at Thu 2023-09-14 21:36:36 UTC, ends at Thu 2023-09-14 22:03:13 UTC. --
	Sep 14 21:55:14 addons-388000 dockerd[1156]: time="2023-09-14T21:55:14.667931603Z" level=info msg="ignoring event" container=40cdfd9e591d6d222ceba6780cb17bfc909ac2ecbb55e2501c67b4a00e7499a9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 14 21:55:14 addons-388000 dockerd[1162]: time="2023-09-14T21:55:14.668071849Z" level=info msg="shim disconnected" id=40cdfd9e591d6d222ceba6780cb17bfc909ac2ecbb55e2501c67b4a00e7499a9 namespace=moby
	Sep 14 21:55:14 addons-388000 dockerd[1162]: time="2023-09-14T21:55:14.668117056Z" level=warning msg="cleaning up after shim disconnected" id=40cdfd9e591d6d222ceba6780cb17bfc909ac2ecbb55e2501c67b4a00e7499a9 namespace=moby
	Sep 14 21:55:14 addons-388000 dockerd[1162]: time="2023-09-14T21:55:14.668121222Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 14 21:56:47 addons-388000 dockerd[1162]: time="2023-09-14T21:56:47.096101665Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 14 21:56:47 addons-388000 dockerd[1162]: time="2023-09-14T21:56:47.096128415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 21:56:47 addons-388000 dockerd[1162]: time="2023-09-14T21:56:47.096134790Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 14 21:56:47 addons-388000 dockerd[1162]: time="2023-09-14T21:56:47.096138914Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 21:56:47 addons-388000 cri-dockerd[1056]: time="2023-09-14T21:56:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2e3fb7677fbbb72b513ff9c738d4b4347a2fe388870c97fd2b8449bb01ea2929/resolv.conf as [nameserver 10.96.0.10 search headlamp.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 14 21:56:47 addons-388000 dockerd[1156]: time="2023-09-14T21:56:47.440443205Z" level=warning msg="reference for unknown type: " digest="sha256:498ea22dc5acadaa4015e7a50335d21fdce45d9e8f1f8adf29c2777da4182f98" remote="ghcr.io/headlamp-k8s/headlamp@sha256:498ea22dc5acadaa4015e7a50335d21fdce45d9e8f1f8adf29c2777da4182f98"
	Sep 14 21:56:52 addons-388000 cri-dockerd[1056]: time="2023-09-14T21:56:52Z" level=info msg="Stop pulling image ghcr.io/headlamp-k8s/headlamp:v0.19.0@sha256:498ea22dc5acadaa4015e7a50335d21fdce45d9e8f1f8adf29c2777da4182f98: Status: Downloaded newer image for ghcr.io/headlamp-k8s/headlamp@sha256:498ea22dc5acadaa4015e7a50335d21fdce45d9e8f1f8adf29c2777da4182f98"
	Sep 14 21:56:52 addons-388000 dockerd[1162]: time="2023-09-14T21:56:52.290106170Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 14 21:56:52 addons-388000 dockerd[1162]: time="2023-09-14T21:56:52.290133544Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 21:56:52 addons-388000 dockerd[1162]: time="2023-09-14T21:56:52.290142794Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 14 21:56:52 addons-388000 dockerd[1162]: time="2023-09-14T21:56:52.290162043Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 21:57:00 addons-388000 dockerd[1162]: time="2023-09-14T21:57:00.415101798Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 14 21:57:00 addons-388000 dockerd[1162]: time="2023-09-14T21:57:00.415132297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 21:57:00 addons-388000 dockerd[1162]: time="2023-09-14T21:57:00.415325584Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 14 21:57:00 addons-388000 dockerd[1162]: time="2023-09-14T21:57:00.415338375Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 21:57:00 addons-388000 cri-dockerd[1056]: time="2023-09-14T21:57:00Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f19d008bf96077bec263ce950e5d45b2ca84f877b5b6a5cc94a2c2393f816d18/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 14 21:57:05 addons-388000 cri-dockerd[1056]: time="2023-09-14T21:57:05Z" level=info msg="Stop pulling image docker.io/nginx:latest: Status: Downloaded newer image for nginx:latest"
	Sep 14 21:57:05 addons-388000 dockerd[1162]: time="2023-09-14T21:57:05.390694560Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 14 21:57:05 addons-388000 dockerd[1162]: time="2023-09-14T21:57:05.390723310Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 21:57:05 addons-388000 dockerd[1162]: time="2023-09-14T21:57:05.390927346Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 14 21:57:05 addons-388000 dockerd[1162]: time="2023-09-14T21:57:05.390933554Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID
	0e57ed777e3e7       nginx@sha256:6926dd802f40e5e7257fded83e0d8030039642e4e10c4a98a6478e9c6fe06153                                                                6 minutes ago       Running             task-pv-container                        0                   f19d008bf9607
	ab77278ca5874       ghcr.io/headlamp-k8s/headlamp@sha256:498ea22dc5acadaa4015e7a50335d21fdce45d9e8f1f8adf29c2777da4182f98                                        6 minutes ago       Running             headlamp                                 0                   2e3fb7677fbbb
	c6e7158ec87e6       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          25 minutes ago      Running             csi-snapshotter                          0                   23a9864c5e7a2
	8fbd96f503108       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          25 minutes ago      Running             csi-provisioner                          0                   23a9864c5e7a2
	5a28f3666ec4d       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            25 minutes ago      Running             liveness-probe                           0                   23a9864c5e7a2
	4a515f3dbd90e       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           25 minutes ago      Running             hostpath                                 0                   23a9864c5e7a2
	726bdbe627b06       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf                                 25 minutes ago      Running             gcp-auth                                 0                   039c490b8ce95
	c5e816aa3fb60       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                25 minutes ago      Running             node-driver-registrar                    0                   23a9864c5e7a2
	0574ef72c784a       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              25 minutes ago      Running             csi-resizer                              0                   928188ebbbe5c
	0af4f9c858980       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   25 minutes ago      Running             csi-external-health-monitor-controller   0                   23a9864c5e7a2
	9a3fe3bf72dd7       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             25 minutes ago      Running             csi-attacher                             0                   aec96cfd028be
	1f519e69776da       97e04611ad434                                                                                                                                26 minutes ago      Running             coredns                                  0                   6b82b02e01da4
	c36ca5fc76214       812f5241df7fd                                                                                                                                26 minutes ago      Running             kube-proxy                               0                   24118a5be8efa
	af45960dc2d7c       b4a5a57e99492                                                                                                                                26 minutes ago      Running             kube-scheduler                           0                   6dde63050aa99
	39f78945ed576       b29fb62480892                                                                                                                                26 minutes ago      Running             kube-apiserver                           0                   a02ab403a50ec
	f2717f532e595       8b6e1980b7584                                                                                                                                26 minutes ago      Running             kube-controller-manager                  0                   834af4f99b3bc
	5a63d0e8296f4       9cdd6470f48c8                                                                                                                                26 minutes ago      Running             etcd                                     0                   b2289ff5c077b
	
	* 
	* ==> coredns [1f519e69776d] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               addons-388000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-388000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7492f2ae2d9b6e62b385ffcd97ebad62c645e82
	                    minikube.k8s.io/name=addons-388000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_14T14_36_54_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-388000
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-388000"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Sep 2023 21:36:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-388000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Sep 2023 22:03:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Sep 2023 22:02:37 +0000   Thu, 14 Sep 2023 21:36:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Sep 2023 22:02:37 +0000   Thu, 14 Sep 2023 21:36:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Sep 2023 22:02:37 +0000   Thu, 14 Sep 2023 21:36:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Sep 2023 22:02:37 +0000   Thu, 14 Sep 2023 21:36:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-388000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 ca8cf67d46214b1fbc59c14cf3d2d66f
	  System UUID:                ca8cf67d46214b1fbc59c14cf3d2d66f
	  Boot ID:                    386c1075-3226-461a-ab43-e16ad465a6c4
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     task-pv-pod                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m14s
	  gcp-auth                    gcp-auth-d4c87556c-pjjjl                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  headlamp                    headlamp-699c48fb74-9lhdj                0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m28s
	  kube-system                 coredns-5dd5756b68-psn28                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     26m
	  kube-system                 csi-hostpath-attacher-0                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 csi-hostpath-resizer-0                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 csi-hostpathplugin-b5k2m                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 etcd-addons-388000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         26m
	  kube-system                 kube-apiserver-addons-388000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-controller-manager-addons-388000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-proxy-8pbsf                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-scheduler-addons-388000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         26m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 26m   kube-proxy       
	  Normal  Starting                 26m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  26m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  26m   kubelet          Node addons-388000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    26m   kubelet          Node addons-388000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     26m   kubelet          Node addons-388000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                26m   kubelet          Node addons-388000 status is now: NodeReady
	  Normal  RegisteredNode           26m   node-controller  Node addons-388000 event: Registered Node addons-388000 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000000] KASLR disabled due to lack of seed
	[  +0.645091] EINJ: EINJ table not found.
	[  +0.506039] systemd-fstab-generator[117]: Ignoring "noauto" for root device
	[  +0.043466] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000824] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +4.211816] systemd-fstab-generator[481]: Ignoring "noauto" for root device
	[  +0.087452] systemd-fstab-generator[492]: Ignoring "noauto" for root device
	[  +0.529791] systemd-fstab-generator[754]: Ignoring "noauto" for root device
	[  +0.178247] systemd-fstab-generator[791]: Ignoring "noauto" for root device
	[  +0.078699] systemd-fstab-generator[802]: Ignoring "noauto" for root device
	[  +0.082696] systemd-fstab-generator[815]: Ignoring "noauto" for root device
	[  +1.243164] systemd-fstab-generator[973]: Ignoring "noauto" for root device
	[  +0.079535] systemd-fstab-generator[984]: Ignoring "noauto" for root device
	[  +0.081510] systemd-fstab-generator[995]: Ignoring "noauto" for root device
	[  +0.082103] systemd-fstab-generator[1006]: Ignoring "noauto" for root device
	[  +0.084560] systemd-fstab-generator[1049]: Ignoring "noauto" for root device
	[  +2.579762] systemd-fstab-generator[1149]: Ignoring "noauto" for root device
	[  +2.146558] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.177617] systemd-fstab-generator[1466]: Ignoring "noauto" for root device
	[  +5.135787] systemd-fstab-generator[2333]: Ignoring "noauto" for root device
	[Sep14 21:37] kauditd_printk_skb: 41 callbacks suppressed
	[  +1.224924] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[  +5.069347] kauditd_printk_skb: 47 callbacks suppressed
	[  +8.062309] kauditd_printk_skb: 6 callbacks suppressed
	[  +6.104989] kauditd_printk_skb: 5 callbacks suppressed
	
	* 
	* ==> etcd [5a63d0e8296f] <==
	* {"level":"info","ts":"2023-09-14T21:36:51.511114Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became leader at term 2"}
	{"level":"info","ts":"2023-09-14T21:36:51.511137Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c46d288d2fcb0590 elected leader c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2023-09-14T21:36:51.511827Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"c46d288d2fcb0590","local-member-attributes":"{Name:addons-388000 ClientURLs:[https://192.168.105.2:2379]}","request-path":"/0/members/c46d288d2fcb0590/attributes","cluster-id":"6e03e7863b4f9c54","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-14T21:36:51.511953Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-14T21:36:51.512015Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-14T21:36:51.512221Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-14T21:36:51.512251Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-14T21:36:51.512273Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-14T21:36:51.512322Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-14T21:36:51.512345Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-14T21:36:51.512368Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-14T21:36:51.512743Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.2:2379"}
	{"level":"info","ts":"2023-09-14T21:36:51.513582Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-14T21:46:51.09248Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":801}
	{"level":"info","ts":"2023-09-14T21:46:51.094243Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":801,"took":"1.342212ms","hash":1083412012}
	{"level":"info","ts":"2023-09-14T21:46:51.094259Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1083412012,"revision":801,"compact-revision":-1}
	{"level":"info","ts":"2023-09-14T21:51:51.097381Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":951}
	{"level":"info","ts":"2023-09-14T21:51:51.097919Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":951,"took":"269.454µs","hash":1387439011}
	{"level":"info","ts":"2023-09-14T21:51:51.097932Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1387439011,"revision":951,"compact-revision":801}
	{"level":"info","ts":"2023-09-14T21:56:51.100187Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1102}
	{"level":"info","ts":"2023-09-14T21:56:51.100605Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1102,"took":"247.994µs","hash":2959214090}
	{"level":"info","ts":"2023-09-14T21:56:51.10062Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2959214090,"revision":1102,"compact-revision":951}
	{"level":"info","ts":"2023-09-14T22:01:51.104012Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1309}
	{"level":"info","ts":"2023-09-14T22:01:51.104575Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1309,"took":"373.243µs","hash":3004076975}
	{"level":"info","ts":"2023-09-14T22:01:51.104589Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3004076975,"revision":1309,"compact-revision":1102}
	
	* 
	* ==> gcp-auth [726bdbe627b0] <==
	* 2023/09/14 21:37:25 GCP Auth Webhook started!
	2023/09/14 21:56:46 Ready to marshal response ...
	2023/09/14 21:56:46 Ready to write response ...
	2023/09/14 21:56:46 Ready to marshal response ...
	2023/09/14 21:56:46 Ready to write response ...
	2023/09/14 21:56:46 Ready to marshal response ...
	2023/09/14 21:56:46 Ready to write response ...
	2023/09/14 21:57:00 Ready to marshal response ...
	2023/09/14 21:57:00 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  22:03:14 up 26 min,  0 users,  load average: 0.10, 0.23, 0.21
	Linux addons-388000 5.10.57 #1 SMP PREEMPT Wed Sep 13 19:05:16 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [39f78945ed57] <==
	* W0914 21:53:00.844205       1 watcher.go:245] watch chan error: etcdserver: mvcc: required revision has been compacted
	I0914 21:53:51.695037       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0914 21:54:51.695833       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0914 21:55:13.478412       1 controller.go:159] removing "v1beta1.metrics.k8s.io" from AggregationController failed with: resource not found
	E0914 21:55:19.745518       1 handler_proxy.go:137] error resolving kube-system/metrics-server: service "metrics-server" not found
	W0914 21:55:19.745550       1 handler_proxy.go:93] no RequestInfo found in the context
	E0914 21:55:19.745571       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0914 21:55:19.745579       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0914 21:56:19.746684       1 handler_proxy.go:137] error resolving kube-system/metrics-server: service "metrics-server" not found
	W0914 21:56:19.746702       1 handler_proxy.go:93] no RequestInfo found in the context
	E0914 21:56:19.746726       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0914 21:56:19.746731       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0914 21:56:46.705370       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.97.191.186"}
	E0914 21:58:19.747771       1 handler_proxy.go:137] error resolving kube-system/metrics-server: service "metrics-server" not found
	W0914 21:58:19.747825       1 handler_proxy.go:93] no RequestInfo found in the context
	E0914 21:58:19.747852       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0914 21:58:19.747863       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0914 22:02:19.748895       1 handler_proxy.go:137] error resolving kube-system/metrics-server: service "metrics-server" not found
	W0914 22:02:19.748911       1 handler_proxy.go:93] no RequestInfo found in the context
	E0914 22:02:19.748937       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0914 22:02:19.748941       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
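The repeated 503s above are the apiserver's OpenAPI aggregation controller retrying the v1beta1.metrics.k8s.io item after the metrics-server addon's backing Service in kube-system disappeared (the removal shows up around 21:55 in both this log and the Docker and controller-manager entries). An illustrative way to see what, if anything, is still registered for that API group, using the same context as the test:

    kubectl --context addons-388000 get apiservice v1beta1.metrics.k8s.io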
	
	* 
	* ==> kube-controller-manager [f2717f532e59] <==
	* I0914 21:37:25.770929       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-d4c87556c" duration="2.310416ms"
	I0914 21:37:25.771746       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-d4c87556c" duration="11.791µs"
	I0914 21:37:53.005858       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0914 21:37:53.014644       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0914 21:37:54.003849       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0914 21:37:54.024141       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0914 21:55:13.485728       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-7c66d45ddc" duration="2.166µs"
	I0914 21:56:46.714052       1 event.go:307] "Event occurred" object="headlamp/headlamp" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set headlamp-699c48fb74 to 1"
	I0914 21:56:46.722225       1 event.go:307] "Event occurred" object="headlamp/headlamp-699c48fb74" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"headlamp-699c48fb74-\" is forbidden: error looking up service account headlamp/headlamp: serviceaccount \"headlamp\" not found"
	I0914 21:56:46.724298       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-699c48fb74" duration="6.644453ms"
	E0914 21:56:46.724312       1 replica_set.go:557] sync "headlamp/headlamp-699c48fb74" failed with pods "headlamp-699c48fb74-" is forbidden: error looking up service account headlamp/headlamp: serviceaccount "headlamp" not found
	I0914 21:56:46.728303       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-699c48fb74" duration="3.955148ms"
	E0914 21:56:46.728333       1 replica_set.go:557] sync "headlamp/headlamp-699c48fb74" failed with pods "headlamp-699c48fb74-" is forbidden: error looking up service account headlamp/headlamp: serviceaccount "headlamp" not found
	I0914 21:56:46.728354       1 event.go:307] "Event occurred" object="headlamp/headlamp-699c48fb74" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"headlamp-699c48fb74-\" is forbidden: error looking up service account headlamp/headlamp: serviceaccount \"headlamp\" not found"
	I0914 21:56:46.735153       1 event.go:307] "Event occurred" object="headlamp/headlamp-699c48fb74" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: headlamp-699c48fb74-9lhdj"
	I0914 21:56:46.737941       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-699c48fb74" duration="8.182122ms"
	I0914 21:56:46.759020       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-699c48fb74" duration="21.052915ms"
	I0914 21:56:46.759228       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-699c48fb74" duration="15.25µs"
	I0914 21:56:46.768831       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-699c48fb74" duration="28.957µs"
	I0914 21:56:52.626631       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-699c48fb74" duration="53.29µs"
	I0914 21:56:52.641632       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-699c48fb74" duration="3.462287ms"
	I0914 21:56:52.641923       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-699c48fb74" duration="12.541µs"
	I0914 21:56:57.882660       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0914 21:56:57.882836       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0914 21:56:59.545001       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	
	* 
	* ==> kube-proxy [c36ca5fc7621] <==
	* I0914 21:37:08.522854       1 server_others.go:69] "Using iptables proxy"
	I0914 21:37:08.529066       1 node.go:141] Successfully retrieved node IP: 192.168.105.2
	I0914 21:37:08.587870       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0914 21:37:08.587883       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0914 21:37:08.588459       1 server_others.go:152] "Using iptables Proxier"
	I0914 21:37:08.588486       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0914 21:37:08.588572       1 server.go:846] "Version info" version="v1.28.1"
	I0914 21:37:08.588578       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 21:37:08.589296       1 config.go:188] "Starting service config controller"
	I0914 21:37:08.589305       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0914 21:37:08.589315       1 config.go:97] "Starting endpoint slice config controller"
	I0914 21:37:08.589317       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0914 21:37:08.589522       1 config.go:315] "Starting node config controller"
	I0914 21:37:08.589524       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0914 21:37:08.690794       1 shared_informer.go:318] Caches are synced for node config
	I0914 21:37:08.690821       1 shared_informer.go:318] Caches are synced for service config
	I0914 21:37:08.690838       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [af45960dc2d7] <==
	* E0914 21:36:52.199210       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0914 21:36:52.199206       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0914 21:36:52.199236       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0914 21:36:52.199258       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0914 21:36:52.199265       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0914 21:36:52.199278       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0914 21:36:52.199281       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0914 21:36:52.199189       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0914 21:36:52.199288       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0914 21:36:52.199260       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0914 21:36:52.199247       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0914 21:36:52.199323       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0914 21:36:53.095318       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0914 21:36:53.095337       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0914 21:36:53.142146       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0914 21:36:53.142164       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0914 21:36:53.158912       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0914 21:36:53.159021       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0914 21:36:53.162940       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0914 21:36:53.163031       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0914 21:36:53.206403       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0914 21:36:53.206481       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0914 21:36:53.209535       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0914 21:36:53.209549       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0914 21:36:53.797539       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-09-14 21:36:36 UTC, ends at Thu 2023-09-14 22:03:14 UTC. --
	Sep 14 21:57:54 addons-388000 kubelet[2339]: E0914 21:57:54.524836    2339 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 14 21:57:54 addons-388000 kubelet[2339]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 14 21:57:54 addons-388000 kubelet[2339]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 21:57:54 addons-388000 kubelet[2339]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 14 21:58:54 addons-388000 kubelet[2339]: E0914 21:58:54.524887    2339 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 14 21:58:54 addons-388000 kubelet[2339]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 14 21:58:54 addons-388000 kubelet[2339]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 21:58:54 addons-388000 kubelet[2339]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 14 21:59:54 addons-388000 kubelet[2339]: E0914 21:59:54.525033    2339 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 14 21:59:54 addons-388000 kubelet[2339]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 14 21:59:54 addons-388000 kubelet[2339]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 21:59:54 addons-388000 kubelet[2339]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 14 22:00:54 addons-388000 kubelet[2339]: E0914 22:00:54.524416    2339 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 14 22:00:54 addons-388000 kubelet[2339]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 14 22:00:54 addons-388000 kubelet[2339]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 22:00:54 addons-388000 kubelet[2339]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 14 22:01:54 addons-388000 kubelet[2339]: E0914 22:01:54.525398    2339 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 14 22:01:54 addons-388000 kubelet[2339]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 14 22:01:54 addons-388000 kubelet[2339]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 22:01:54 addons-388000 kubelet[2339]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 14 22:01:54 addons-388000 kubelet[2339]: W0914 22:01:54.547125    2339 machine.go:65] Cannot read vendor id correctly, set empty.
	Sep 14 22:02:54 addons-388000 kubelet[2339]: E0914 22:02:54.524680    2339 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 14 22:02:54 addons-388000 kubelet[2339]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 14 22:02:54 addons-388000 kubelet[2339]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 22:02:54 addons-388000 kubelet[2339]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
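The only recurring kubelet error in the captured journal is the once-a-minute iptables canary failure: the guest kernel in this Buildroot image exposes no ip6tables nat table, so creating the KUBE-KUBELET-CANARY chain fails. kube-proxy reports single-stack IPv4 in its log above, so this is likely noise rather than a cause of the failures. If one wanted to confirm from inside the VM (illustrative; the module name is the usual Linux one and is not verified against this image):

    out/minikube-darwin-arm64 -p addons-388000 ssh -- sudo ip6tables -t nat -L
    out/minikube-darwin-arm64 -p addons-388000 ssh -- sudo modprobe ip6table_nat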
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-388000 -n addons-388000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-388000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/InspektorGadget FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/InspektorGadget (480.83s)

                                                
                                    
TestAddons/parallel/CSI (374.09s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:537: csi-hostpath-driver pods stabilized in 2.50025ms
addons_test.go:540: (dbg) Run:  kubectl --context addons-388000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:545: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:550: (dbg) Run:  kubectl --context addons-388000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [a1aed78c-332f-40fa-b16a-f401be53c2c5] Pending
helpers_test.go:344: "task-pv-pod" [a1aed78c-332f-40fa-b16a-f401be53c2c5] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [a1aed78c-332f-40fa-b16a-f401be53c2c5] Running
addons_test.go:555: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.013890709s
addons_test.go:560: (dbg) Run:  kubectl --context addons-388000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:560: (dbg) Non-zero exit: kubectl --context addons-388000 create -f testdata/csi-hostpath-driver/snapshot.yaml: exit status 1 (58.849208ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): error when creating "testdata/csi-hostpath-driver/snapshot.yaml": the server could not find the requested resource (post volumesnapshots.snapshot.storage.k8s.io)

                                                
                                                
** /stderr **
addons_test.go:562: creating pod with kubectl --context addons-388000 create -f testdata/csi-hostpath-driver/snapshot.yaml failed: exit status 1
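The NotFound error above points at the snapshot.storage.k8s.io API group rather than at the hostpath driver itself: the VolumeSnapshot CRDs that the volumesnapshots addon is expected to install were evidently not registered when the test posted snapshot.yaml. An illustrative check with the same kubectl context (the resource names are the standard external-snapshotter ones):

    kubectl --context addons-388000 api-resources --api-group=snapshot.storage.k8s.io
    kubectl --context addons-388000 get crd volumesnapshots.snapshot.storage.k8s.io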
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (33.995375ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
[... the Run / Non-zero exit / NotFound stderr / WARNING cycle above repeats 75 more times during the 6m0s wait; every attempt exits with status 1 after roughly 34-39 ms with the identical NotFound error ...]
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (34.861792ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (35.438292ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (35.25175ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (34.968833ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (35.194459ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (36.234458ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (37.4055ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (35.88075ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (35.788625ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (34.3215ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (35.094833ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (35.257375ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (34.148375ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (34.71725ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (35.50225ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (35.821625ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (35.718583ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (34.229583ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (35.595667ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (35.194458ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (34.995875ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (35.338125ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (34.851042ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (34.786708ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (35.492208ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (35.50125ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (35.307ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (35.591167ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (36.560208ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (35.652417ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (35.903666ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (36.469333ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (35.926875ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (35.35775ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (34.963208ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (35.866417ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (42.63025ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (34.828583ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (35.619084ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (35.55575ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (35.942625ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (36.079167ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (35.192166ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (35.266666ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (35.564958ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (36.006917ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (37.257792ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (36.193667ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (34.932042ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (34.974542ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (36.493917ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (36.0395ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (35.143458ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (35.732416ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (35.618208ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (35.579625ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (40.612042ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (34.972042ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (34.876541ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (34.659458ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (36.916209ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (35.963459ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (36.81675ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (35.140209ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (35.613958ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (35.06775ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (37.139792ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (35.273875ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (35.522625ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (35.480459ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (34.421542ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (36.206958ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (37.847792ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (35.101792ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (35.406625ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (33.904083ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (36.004333ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (35.120042ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (36.096667ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (34.539167ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (34.526834ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (36.924416ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (35.911708ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (34.823292ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (36.2605ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (35.426625ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (35.478875ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (34.787041ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (35.763709ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (36.929042ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (36.027709ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (38.116292ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (37.340209ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (35.526166ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (36.0455ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (35.346333ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (36.456666ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (35.966834ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (35.624084ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (36.062541ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (36.074083ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (35.057709ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (36.476625ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (35.911542ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (35.533584ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (35.1235ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (36.085417ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (36.370334ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (35.720583ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (35.990375ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (35.9365ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (35.887541ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (38.869958ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (35.176792ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (35.858666ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (34.948417ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (37.981917ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (36.226208ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (34.857417ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (35.969584ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (35.600042ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (35.152209ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (37.344625ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (35.211292ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421/419: the same WARNING / Run / Non-zero exit / stderr sequence was logged for 75 further poll attempts; every attempt exited with status 1 and the same NotFound error, taking roughly 34-40 ms each. Only the final attempt is reproduced below.
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (35.588584ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (35.218875ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (35.780625ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (36.407125ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (34.951708ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (37.290416ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (35.180542ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (35.873791ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (36.510792ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (35.387125ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (37.905791ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-388000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (35.776958ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io new-snapshot-demo)

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
addons_test.go:566: failed waiting for volume snapshot new-snapshot-demo: context deadline exceeded
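Note on the failure above: every probe returned NotFound for the volumesnapshots.snapshot.storage.k8s.io resource, which suggests the snapshot API was never being served in this cluster, rather than a snapshot object that was merely slow to become ready. For local reproduction, the readiness poll the harness performs can be approximated with a short Go helper that shells out to the same kubectl command until .status.readyToUse reports "true". This is an illustrative sketch only, not the harness's own retry code from addons_test.go/helpers_test.go; the waitForSnapshotReady name, the 6-minute timeout, and the 10-second poll interval are assumptions not taken from this report.

// pollsnapshot.go: illustrative sketch of the readiness poll logged above.
// Assumptions (not from the report): function name, timeout, poll interval.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForSnapshotReady re-runs the same kubectl command shown in the log
// until the snapshot's .status.readyToUse field is "true" or the context
// deadline expires (mirroring the "context deadline exceeded" failure).
func waitForSnapshotReady(ctx context.Context, kubeContext, namespace, name string) error {
	ticker := time.NewTicker(10 * time.Second)
	defer ticker.Stop()
	for {
		out, err := exec.CommandContext(ctx, "kubectl",
			"--context", kubeContext,
			"get", "volumesnapshot", name,
			"-o", "jsonpath={.status.readyToUse}",
			"-n", namespace).CombinedOutput()
		if err == nil && strings.TrimSpace(string(out)) == "true" {
			return nil
		}
		// NotFound (as seen repeatedly above) and "false" both fall through to retry.
		select {
		case <-ctx.Done():
			return fmt.Errorf("waiting for volume snapshot %s: %w", name, ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitForSnapshotReady(ctx, "addons-388000", "default", "new-snapshot-demo"); err != nil {
		fmt.Println("FAIL:", err)
		return
	}
	fmt.Println("volume snapshot new-snapshot-demo is ready")
}

Running this against the addons-388000 profile from this report should fail the same way until the snapshot controller and CRDs are actually serving the volumesnapshot API.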
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-388000 -n addons-388000
helpers_test.go:244: <<< TestAddons/parallel/CSI FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/CSI]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-388000 logs -n 25
helpers_test.go:252: TestAddons/parallel/CSI logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-917000 | jenkins | v1.31.2 | 14 Sep 23 14:35 PDT |                     |
	|         | -p download-only-917000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-917000 | jenkins | v1.31.2 | 14 Sep 23 14:35 PDT |                     |
	|         | -p download-only-917000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.31.2 | 14 Sep 23 14:36 PDT | 14 Sep 23 14:36 PDT |
	| delete  | -p download-only-917000        | download-only-917000 | jenkins | v1.31.2 | 14 Sep 23 14:36 PDT | 14 Sep 23 14:36 PDT |
	| delete  | -p download-only-917000        | download-only-917000 | jenkins | v1.31.2 | 14 Sep 23 14:36 PDT | 14 Sep 23 14:36 PDT |
	| start   | --download-only -p             | binary-mirror-231000 | jenkins | v1.31.2 | 14 Sep 23 14:36 PDT |                     |
	|         | binary-mirror-231000           |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --binary-mirror                |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49379         |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-231000        | binary-mirror-231000 | jenkins | v1.31.2 | 14 Sep 23 14:36 PDT | 14 Sep 23 14:36 PDT |
	| start   | -p addons-388000               | addons-388000        | jenkins | v1.31.2 | 14 Sep 23 14:36 PDT | 14 Sep 23 14:43 PDT |
	|         | --wait=true --memory=4000      |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --addons=registry              |                      |         |         |                     |                     |
	|         | --addons=metrics-server        |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                      |         |         |                     |                     |
	|         | --addons=gcp-auth              |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|         | --addons=ingress               |                      |         |         |                     |                     |
	|         | --addons=ingress-dns           |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-388000        | jenkins | v1.31.2 | 14 Sep 23 14:55 PDT |                     |
	|         | addons-388000                  |                      |         |         |                     |                     |
	| addons  | addons-388000 addons           | addons-388000        | jenkins | v1.31.2 | 14 Sep 23 14:55 PDT | 14 Sep 23 14:55 PDT |
	|         | disable metrics-server         |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                      |         |         |                     |                     |
	| addons  | enable headlamp                | addons-388000        | jenkins | v1.31.2 | 14 Sep 23 14:56 PDT | 14 Sep 23 14:56 PDT |
	|         | -p addons-388000               |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/14 14:36:23
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.21.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 14:36:23.572515    1522 out.go:296] Setting OutFile to fd 1 ...
	I0914 14:36:23.572636    1522 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 14:36:23.572639    1522 out.go:309] Setting ErrFile to fd 2...
	I0914 14:36:23.572642    1522 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 14:36:23.572752    1522 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17243-1006/.minikube/bin
	I0914 14:36:23.573756    1522 out.go:303] Setting JSON to false
	I0914 14:36:23.588610    1522 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":357,"bootTime":1694727026,"procs":391,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 14:36:23.588683    1522 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0914 14:36:23.593630    1522 out.go:177] * [addons-388000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0914 14:36:23.600459    1522 out.go:177]   - MINIKUBE_LOCATION=17243
	I0914 14:36:23.600497    1522 notify.go:220] Checking for updates...
	I0914 14:36:23.603591    1522 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	I0914 14:36:23.606425    1522 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 14:36:23.609496    1522 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 14:36:23.612541    1522 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	I0914 14:36:23.615423    1522 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 14:36:23.618648    1522 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 14:36:23.622479    1522 out.go:177] * Using the qemu2 driver based on user configuration
	I0914 14:36:23.629482    1522 start.go:298] selected driver: qemu2
	I0914 14:36:23.629487    1522 start.go:902] validating driver "qemu2" against <nil>
	I0914 14:36:23.629493    1522 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 14:36:23.631382    1522 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0914 14:36:23.634542    1522 out.go:177] * Automatically selected the socket_vmnet network
	I0914 14:36:23.637548    1522 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 14:36:23.637570    1522 cni.go:84] Creating CNI manager for ""
	I0914 14:36:23.637578    1522 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 14:36:23.637583    1522 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0914 14:36:23.637590    1522 start_flags.go:321] config:
	{Name:addons-388000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:addons-388000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 14:36:23.641729    1522 iso.go:125] acquiring lock: {Name:mkd5b68b252c18146a825ff5365883948b3c6983 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 14:36:23.649492    1522 out.go:177] * Starting control plane node addons-388000 in cluster addons-388000
	I0914 14:36:23.653459    1522 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0914 14:36:23.653478    1522 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0914 14:36:23.653492    1522 cache.go:57] Caching tarball of preloaded images
	I0914 14:36:23.653557    1522 preload.go:174] Found /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 14:36:23.653564    1522 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0914 14:36:23.653811    1522 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/config.json ...
	I0914 14:36:23.653825    1522 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/config.json: {Name:mk9010c5dfb0ad4a966bb29118112217ba3b6cf7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 14:36:23.654041    1522 start.go:365] acquiring machines lock for addons-388000: {Name:mk831f2bd5a9a0c932c8a59972319d68edb53d07 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 14:36:23.654147    1522 start.go:369] acquired machines lock for "addons-388000" in 99.875µs
	I0914 14:36:23.654159    1522 start.go:93] Provisioning new machine with config: &{Name:addons-388000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:addons-388000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 14:36:23.654194    1522 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 14:36:23.662516    1522 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0914 14:36:23.982709    1522 start.go:159] libmachine.API.Create for "addons-388000" (driver="qemu2")
	I0914 14:36:23.982756    1522 client.go:168] LocalClient.Create starting
	I0914 14:36:23.982899    1522 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem
	I0914 14:36:24.329911    1522 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/cert.pem
	I0914 14:36:24.425142    1522 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17243-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1694625400-17243-arm64.iso...
	I0914 14:36:24.784281    1522 main.go:141] libmachine: Creating SSH key...
	I0914 14:36:25.013863    1522 main.go:141] libmachine: Creating Disk image...
	I0914 14:36:25.013874    1522 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 14:36:25.014143    1522 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/disk.qcow2.raw /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/disk.qcow2
	I0914 14:36:25.048599    1522 main.go:141] libmachine: STDOUT: 
	I0914 14:36:25.048634    1522 main.go:141] libmachine: STDERR: 
	I0914 14:36:25.048701    1522 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/disk.qcow2 +20000M
	I0914 14:36:25.056105    1522 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 14:36:25.056122    1522 main.go:141] libmachine: STDERR: 
	I0914 14:36:25.056141    1522 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/disk.qcow2
	I0914 14:36:25.056150    1522 main.go:141] libmachine: Starting QEMU VM...
	I0914 14:36:25.056194    1522 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:ab:b1:c2:6f:25 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/disk.qcow2
	I0914 14:36:25.122275    1522 main.go:141] libmachine: STDOUT: 
	I0914 14:36:25.122322    1522 main.go:141] libmachine: STDERR: 
	I0914 14:36:25.122327    1522 main.go:141] libmachine: Attempt 0
	I0914 14:36:25.122346    1522 main.go:141] libmachine: Searching for fa:ab:b1:c2:6f:25 in /var/db/dhcpd_leases ...
	I0914 14:36:27.123500    1522 main.go:141] libmachine: Attempt 1
	I0914 14:36:27.123581    1522 main.go:141] libmachine: Searching for fa:ab:b1:c2:6f:25 in /var/db/dhcpd_leases ...
	I0914 14:36:29.124764    1522 main.go:141] libmachine: Attempt 2
	I0914 14:36:29.124788    1522 main.go:141] libmachine: Searching for fa:ab:b1:c2:6f:25 in /var/db/dhcpd_leases ...
	I0914 14:36:31.125900    1522 main.go:141] libmachine: Attempt 3
	I0914 14:36:31.125919    1522 main.go:141] libmachine: Searching for fa:ab:b1:c2:6f:25 in /var/db/dhcpd_leases ...
	I0914 14:36:33.126934    1522 main.go:141] libmachine: Attempt 4
	I0914 14:36:33.126945    1522 main.go:141] libmachine: Searching for fa:ab:b1:c2:6f:25 in /var/db/dhcpd_leases ...
	I0914 14:36:35.127988    1522 main.go:141] libmachine: Attempt 5
	I0914 14:36:35.128006    1522 main.go:141] libmachine: Searching for fa:ab:b1:c2:6f:25 in /var/db/dhcpd_leases ...
	I0914 14:36:37.130061    1522 main.go:141] libmachine: Attempt 6
	I0914 14:36:37.130089    1522 main.go:141] libmachine: Searching for fa:ab:b1:c2:6f:25 in /var/db/dhcpd_leases ...
	I0914 14:36:37.130226    1522 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0914 14:36:37.130272    1522 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:fa:ab:b1:c2:6f:25 ID:1,fa:ab:b1:c2:6f:25 Lease:0x6504ce64}
	I0914 14:36:37.130284    1522 main.go:141] libmachine: Found match: fa:ab:b1:c2:6f:25
	I0914 14:36:37.130296    1522 main.go:141] libmachine: IP: 192.168.105.2
	I0914 14:36:37.130304    1522 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
	I0914 14:36:39.152264    1522 machine.go:88] provisioning docker machine ...
	I0914 14:36:39.152328    1522 buildroot.go:166] provisioning hostname "addons-388000"
	I0914 14:36:39.153898    1522 main.go:141] libmachine: Using SSH client type: native
	I0914 14:36:39.154765    1522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102894760] 0x102896ed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0914 14:36:39.154789    1522 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-388000 && echo "addons-388000" | sudo tee /etc/hostname
	I0914 14:36:39.254406    1522 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-388000
	
	I0914 14:36:39.254547    1522 main.go:141] libmachine: Using SSH client type: native
	I0914 14:36:39.254974    1522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102894760] 0x102896ed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0914 14:36:39.254987    1522 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-388000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-388000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-388000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 14:36:39.336783    1522 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 14:36:39.336807    1522 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17243-1006/.minikube CaCertPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17243-1006/.minikube}
	I0914 14:36:39.336834    1522 buildroot.go:174] setting up certificates
	I0914 14:36:39.336842    1522 provision.go:83] configureAuth start
	I0914 14:36:39.336850    1522 provision.go:138] copyHostCerts
	I0914 14:36:39.337062    1522 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17243-1006/.minikube/ca.pem (1082 bytes)
	I0914 14:36:39.337458    1522 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17243-1006/.minikube/cert.pem (1123 bytes)
	I0914 14:36:39.337624    1522 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17243-1006/.minikube/key.pem (1675 bytes)
	I0914 14:36:39.337823    1522 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca-key.pem org=jenkins.addons-388000 san=[192.168.105.2 192.168.105.2 localhost 127.0.0.1 minikube addons-388000]
	I0914 14:36:39.438902    1522 provision.go:172] copyRemoteCerts
	I0914 14:36:39.438967    1522 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 14:36:39.438977    1522 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/id_rsa Username:docker}
	I0914 14:36:39.475382    1522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0914 14:36:39.482935    1522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0914 14:36:39.490611    1522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0914 14:36:39.498058    1522 provision.go:86] duration metric: configureAuth took 161.21375ms
	I0914 14:36:39.498072    1522 buildroot.go:189] setting minikube options for container-runtime
	I0914 14:36:39.498194    1522 config.go:182] Loaded profile config "addons-388000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 14:36:39.498238    1522 main.go:141] libmachine: Using SSH client type: native
	I0914 14:36:39.498454    1522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102894760] 0x102896ed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0914 14:36:39.498461    1522 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0914 14:36:39.568371    1522 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0914 14:36:39.568380    1522 buildroot.go:70] root file system type: tmpfs
	I0914 14:36:39.568444    1522 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0914 14:36:39.568493    1522 main.go:141] libmachine: Using SSH client type: native
	I0914 14:36:39.568758    1522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102894760] 0x102896ed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0914 14:36:39.568795    1522 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0914 14:36:39.642658    1522 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0914 14:36:39.642714    1522 main.go:141] libmachine: Using SSH client type: native
	I0914 14:36:39.642984    1522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102894760] 0x102896ed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0914 14:36:39.642994    1522 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0914 14:36:40.018079    1522 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0914 14:36:40.018095    1522 machine.go:91] provisioned docker machine in 865.825208ms
	I0914 14:36:40.018101    1522 client.go:171] LocalClient.Create took 16.035747292s
	I0914 14:36:40.018112    1522 start.go:167] duration metric: libmachine.API.Create for "addons-388000" took 16.035815708s
	I0914 14:36:40.018117    1522 start.go:300] post-start starting for "addons-388000" (driver="qemu2")
	I0914 14:36:40.018121    1522 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 14:36:40.018186    1522 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 14:36:40.018197    1522 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/id_rsa Username:docker}
	I0914 14:36:40.056512    1522 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 14:36:40.057796    1522 info.go:137] Remote host: Buildroot 2021.02.12
	I0914 14:36:40.057807    1522 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17243-1006/.minikube/addons for local assets ...
	I0914 14:36:40.057875    1522 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17243-1006/.minikube/files for local assets ...
	I0914 14:36:40.057901    1522 start.go:303] post-start completed in 39.782666ms
	I0914 14:36:40.058218    1522 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/config.json ...
	I0914 14:36:40.058366    1522 start.go:128] duration metric: createHost completed in 16.404584042s
	I0914 14:36:40.058389    1522 main.go:141] libmachine: Using SSH client type: native
	I0914 14:36:40.058608    1522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102894760] 0x102896ed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0914 14:36:40.058612    1522 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0914 14:36:40.126242    1522 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694727400.596628044
	
	I0914 14:36:40.126252    1522 fix.go:206] guest clock: 1694727400.596628044
	I0914 14:36:40.126256    1522 fix.go:219] Guest: 2023-09-14 14:36:40.596628044 -0700 PDT Remote: 2023-09-14 14:36:40.058369 -0700 PDT m=+16.505601626 (delta=538.259044ms)
	I0914 14:36:40.126267    1522 fix.go:190] guest clock delta is within tolerance: 538.259044ms
	I0914 14:36:40.126272    1522 start.go:83] releasing machines lock for "addons-388000", held for 16.472537s
	I0914 14:36:40.126627    1522 ssh_runner.go:195] Run: cat /version.json
	I0914 14:36:40.126630    1522 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 14:36:40.126636    1522 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/id_rsa Username:docker}
	I0914 14:36:40.126680    1522 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/id_rsa Username:docker}
	I0914 14:36:40.164117    1522 ssh_runner.go:195] Run: systemctl --version
	I0914 14:36:40.279852    1522 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 14:36:40.282756    1522 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 14:36:40.282802    1522 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 14:36:40.290141    1522 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 14:36:40.290164    1522 start.go:469] detecting cgroup driver to use...
	I0914 14:36:40.290325    1522 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 14:36:40.298242    1522 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0914 14:36:40.302485    1522 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0914 14:36:40.306314    1522 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0914 14:36:40.306335    1522 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0914 14:36:40.309906    1522 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0914 14:36:40.313708    1522 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0914 14:36:40.317003    1522 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0914 14:36:40.319988    1522 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 14:36:40.323114    1522 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0914 14:36:40.326593    1522 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 14:36:40.329687    1522 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 14:36:40.332474    1522 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 14:36:40.414020    1522 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0914 14:36:40.421074    1522 start.go:469] detecting cgroup driver to use...
	I0914 14:36:40.421134    1522 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0914 14:36:40.426647    1522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 14:36:40.431508    1522 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 14:36:40.437031    1522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 14:36:40.441206    1522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0914 14:36:40.445778    1522 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0914 14:36:40.494559    1522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0914 14:36:40.500245    1522 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 14:36:40.506085    1522 ssh_runner.go:195] Run: which cri-dockerd
	I0914 14:36:40.507323    1522 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0914 14:36:40.510306    1522 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0914 14:36:40.515235    1522 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0914 14:36:40.590641    1522 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0914 14:36:40.670685    1522 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0914 14:36:40.670697    1522 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0914 14:36:40.676022    1522 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 14:36:40.753642    1522 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0914 14:36:41.915654    1522 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.162025209s)
	I0914 14:36:41.915719    1522 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0914 14:36:41.996165    1522 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0914 14:36:42.077673    1522 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0914 14:36:42.158787    1522 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 14:36:42.238393    1522 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0914 14:36:42.246223    1522 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 14:36:42.322653    1522 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0914 14:36:42.347035    1522 start.go:516] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0914 14:36:42.347147    1522 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0914 14:36:42.349276    1522 start.go:537] Will wait 60s for crictl version
	I0914 14:36:42.349310    1522 ssh_runner.go:195] Run: which crictl
	I0914 14:36:42.350645    1522 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 14:36:42.367912    1522 start.go:553] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1alpha2
	I0914 14:36:42.367994    1522 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0914 14:36:42.377957    1522 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0914 14:36:42.394599    1522 out.go:204] * Preparing Kubernetes v1.28.1 on Docker 24.0.6 ...
	I0914 14:36:42.394744    1522 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0914 14:36:42.396150    1522 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 14:36:42.399678    1522 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0914 14:36:42.399720    1522 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0914 14:36:42.404754    1522 docker.go:636] Got preloaded images: 
	I0914 14:36:42.404761    1522 docker.go:642] registry.k8s.io/kube-apiserver:v1.28.1 wasn't preloaded
	I0914 14:36:42.404801    1522 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0914 14:36:42.407644    1522 ssh_runner.go:195] Run: which lz4
	I0914 14:36:42.408926    1522 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0914 14:36:42.410207    1522 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 14:36:42.410221    1522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (356902558 bytes)
	I0914 14:36:43.758723    1522 docker.go:600] Took 1.349866 seconds to copy over tarball
	I0914 14:36:43.758788    1522 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 14:36:44.802481    1522 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.043706042s)
	I0914 14:36:44.802494    1522 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 14:36:44.818862    1522 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0914 14:36:44.822486    1522 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0914 14:36:44.827997    1522 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 14:36:44.904406    1522 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0914 14:36:47.070320    1522 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.165952375s)
	I0914 14:36:47.070426    1522 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0914 14:36:47.076673    1522 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.1
	registry.k8s.io/kube-scheduler:v1.28.1
	registry.k8s.io/kube-proxy:v1.28.1
	registry.k8s.io/kube-controller-manager:v1.28.1
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0914 14:36:47.076684    1522 cache_images.go:84] Images are preloaded, skipping loading
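
The preload decision above works by listing the images already present in the Docker daemon and comparing them against the images required for the requested Kubernetes version; only when a required image such as registry.k8s.io/kube-apiserver:v1.28.1 is missing does minikube copy and extract the preload tarball. Below is a minimal sketch of that kind of check written against the docker CLI directly; the helper name hasImage is illustrative and is not part of minikube's docker.go.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hasImage reports whether the local Docker daemon already has the given
// repository:tag, mirroring the "wasn't preloaded" decision in the log above.
func hasImage(required string) (bool, error) {
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		return false, err
	}
	for _, img := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if img == required {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.28.1")
	if err != nil {
		fmt.Println("docker not reachable:", err)
		return
	}
	if ok {
		fmt.Println("images are preloaded, skipping loading")
	} else {
		fmt.Println("image missing, would copy and extract preloaded.tar.lz4")
	}
}
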
	I0914 14:36:47.076750    1522 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0914 14:36:47.084410    1522 cni.go:84] Creating CNI manager for ""
	I0914 14:36:47.084420    1522 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 14:36:47.084443    1522 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0914 14:36:47.084452    1522 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-388000 NodeName:addons-388000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 14:36:47.084527    1522 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-388000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 14:36:47.084571    1522 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-388000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:addons-388000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
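
The kubelet drop-in shown above is rendered from the cluster config (node name, node IP, CRI socket, Kubernetes version) before being copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A hedged sketch of rendering such a drop-in with Go's text/template follows; the template text and field names are illustrative, not minikube's actual kubeadm.go code.

package main

import (
	"os"
	"text/template"
)

// kubeletUnit is an illustrative template for the 10-kubeadm.conf drop-in
// shown in the log; the real file is produced by minikube's own templates.
const kubeletUnit = `[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint={{.CRISocket}} --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletUnit))
	// Values taken from the log above (addons-388000 on 192.168.105.2).
	_ = t.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.28.1",
		"CRISocket":         "unix:///var/run/cri-dockerd.sock",
		"NodeName":          "addons-388000",
		"NodeIP":            "192.168.105.2",
	})
}
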
	I0914 14:36:47.084633    1522 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0914 14:36:47.087471    1522 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 14:36:47.087501    1522 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 14:36:47.090481    1522 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0914 14:36:47.095702    1522 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 14:36:47.100584    1522 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I0914 14:36:47.105532    1522 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0914 14:36:47.106963    1522 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 14:36:47.110892    1522 certs.go:56] Setting up /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000 for IP: 192.168.105.2
	I0914 14:36:47.110903    1522 certs.go:190] acquiring lock for shared ca certs: {Name:mkd19d6e2143685b57ba1e0d43c4081bbdb26a79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 14:36:47.111053    1522 certs.go:204] generating minikubeCA CA: /Users/jenkins/minikube-integration/17243-1006/.minikube/ca.key
	I0914 14:36:47.228830    1522 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17243-1006/.minikube/ca.crt ...
	I0914 14:36:47.228840    1522 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/ca.crt: {Name:mk1c10f9290e336c983838c8c09bb8cd18a9a4c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 14:36:47.229095    1522 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17243-1006/.minikube/ca.key ...
	I0914 14:36:47.229099    1522 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/ca.key: {Name:mkbc669c78b9b93a07aa566669e7e92430fec9e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 14:36:47.229219    1522 certs.go:204] generating proxyClientCA CA: /Users/jenkins/minikube-integration/17243-1006/.minikube/proxy-client-ca.key
	I0914 14:36:47.333428    1522 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17243-1006/.minikube/proxy-client-ca.crt ...
	I0914 14:36:47.333432    1522 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/proxy-client-ca.crt: {Name:mk85d65dc023d08a0f4cb19cc395e69f12c9ed1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 14:36:47.333577    1522 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17243-1006/.minikube/proxy-client-ca.key ...
	I0914 14:36:47.333579    1522 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/proxy-client-ca.key: {Name:mk62bc08bafeee956e88b9480bac37c2df91bf30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 14:36:47.333721    1522 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/client.key
	I0914 14:36:47.333730    1522 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/client.crt with IP's: []
	I0914 14:36:47.598337    1522 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/client.crt ...
	I0914 14:36:47.598352    1522 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/client.crt: {Name:mk8ecd4e838807718c7ef97bafd599d3b7fd1a4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 14:36:47.598702    1522 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/client.key ...
	I0914 14:36:47.598710    1522 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/client.key: {Name:mk3960bc5fb536243466f07f9f23680cfa92d826 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 14:36:47.598826    1522 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/apiserver.key.96055969
	I0914 14:36:47.598838    1522 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/apiserver.crt.96055969 with IP's: [192.168.105.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0914 14:36:47.656638    1522 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/apiserver.crt.96055969 ...
	I0914 14:36:47.656642    1522 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/apiserver.crt.96055969: {Name:mk3691ba24392ca70b8d7adb6c837bd5b52dfeeb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 14:36:47.656789    1522 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/apiserver.key.96055969 ...
	I0914 14:36:47.656792    1522 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/apiserver.key.96055969: {Name:mk7619af569a08784491e3a0055c754ead430eca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 14:36:47.656913    1522 certs.go:337] copying /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/apiserver.crt.96055969 -> /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/apiserver.crt
	I0914 14:36:47.657047    1522 certs.go:341] copying /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/apiserver.key.96055969 -> /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/apiserver.key
	I0914 14:36:47.657134    1522 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/proxy-client.key
	I0914 14:36:47.657146    1522 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/proxy-client.crt with IP's: []
	I0914 14:36:47.715161    1522 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/proxy-client.crt ...
	I0914 14:36:47.715165    1522 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/proxy-client.crt: {Name:mk5c5221c842b768f8e9ba880dc08acd610bf8dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 14:36:47.715298    1522 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/proxy-client.key ...
	I0914 14:36:47.715301    1522 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/proxy-client.key: {Name:mk620ca3f197a51ffd017e6711b4bab26fb15d79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 14:36:47.715560    1522 certs.go:437] found cert: /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 14:36:47.715594    1522 certs.go:437] found cert: /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem (1082 bytes)
	I0914 14:36:47.715621    1522 certs.go:437] found cert: /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/cert.pem (1123 bytes)
	I0914 14:36:47.715645    1522 certs.go:437] found cert: /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/key.pem (1675 bytes)
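
The certs.go/crypto.go steps above first create a self-signed minikubeCA (and proxyClientCA), then use them to sign the client, apiserver, and proxy-client certificates. Below is a minimal, self-contained sketch of the self-signed CA portion using Go's crypto/x509; the output file names, key size, and validity period are illustrative assumptions, not minikube's actual crypto.go implementation.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

func main() {
	// Generate a CA key pair (RSA 2048 here for brevity).
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
		IsCA:                  true,
	}
	// Self-sign: the template is both subject and issuer.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	_ = os.WriteFile("ca.crt", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644)
	_ = os.WriteFile("ca.key", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)}), 0600)
}
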
	I0914 14:36:47.716027    1522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0914 14:36:47.723894    1522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 14:36:47.731037    1522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 14:36:47.738379    1522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 14:36:47.745927    1522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 14:36:47.752925    1522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 14:36:47.759542    1522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 14:36:47.766602    1522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0914 14:36:47.773763    1522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 14:36:47.780697    1522 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 14:36:47.786484    1522 ssh_runner.go:195] Run: openssl version
	I0914 14:36:47.788649    1522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 14:36:47.791615    1522 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 14:36:47.793075    1522 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 14 21:36 /usr/share/ca-certificates/minikubeCA.pem
	I0914 14:36:47.793092    1522 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 14:36:47.794978    1522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 14:36:47.798423    1522 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0914 14:36:47.799931    1522 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0914 14:36:47.799971    1522 kubeadm.go:404] StartCluster: {Name:addons-388000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:addons-388000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 14:36:47.800034    1522 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0914 14:36:47.805504    1522 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 14:36:47.808480    1522 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 14:36:47.811111    1522 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 14:36:47.814398    1522 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 14:36:47.814412    1522 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 14:36:47.835210    1522 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0914 14:36:47.835254    1522 kubeadm.go:322] [preflight] Running pre-flight checks
	I0914 14:36:47.889698    1522 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 14:36:47.889750    1522 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 14:36:47.889794    1522 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0914 14:36:47.952261    1522 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 14:36:47.962464    1522 out.go:204]   - Generating certificates and keys ...
	I0914 14:36:47.962497    1522 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0914 14:36:47.962525    1522 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0914 14:36:48.025951    1522 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0914 14:36:48.134925    1522 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0914 14:36:48.186988    1522 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0914 14:36:48.299178    1522 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0914 14:36:48.429498    1522 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0914 14:36:48.429557    1522 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-388000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0914 14:36:48.510620    1522 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0914 14:36:48.510686    1522 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-388000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0914 14:36:48.631510    1522 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0914 14:36:48.668002    1522 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0914 14:36:48.726941    1522 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0914 14:36:48.726969    1522 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 14:36:48.823035    1522 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 14:36:48.918005    1522 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 14:36:49.052610    1522 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 14:36:49.136045    1522 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 14:36:49.136292    1522 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 14:36:49.138218    1522 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 14:36:49.141449    1522 out.go:204]   - Booting up control plane ...
	I0914 14:36:49.141518    1522 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 14:36:49.141563    1522 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 14:36:49.141596    1522 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 14:36:49.146098    1522 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 14:36:49.146527    1522 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 14:36:49.146584    1522 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0914 14:36:49.235726    1522 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 14:36:53.234480    1522 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.002199 seconds
	I0914 14:36:53.234548    1522 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 14:36:53.240692    1522 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 14:36:53.748795    1522 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0914 14:36:53.748894    1522 kubeadm.go:322] [mark-control-plane] Marking the node addons-388000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0914 14:36:54.253997    1522 kubeadm.go:322] [bootstrap-token] Using token: v43sey.bixdamecwwaf1quf
	I0914 14:36:54.261418    1522 out.go:204]   - Configuring RBAC rules ...
	I0914 14:36:54.261475    1522 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 14:36:54.262616    1522 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0914 14:36:54.269041    1522 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 14:36:54.270041    1522 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0914 14:36:54.271028    1522 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 14:36:54.272209    1522 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 14:36:54.276273    1522 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0914 14:36:54.432396    1522 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0914 14:36:54.665469    1522 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0914 14:36:54.665894    1522 kubeadm.go:322] 
	I0914 14:36:54.665937    1522 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0914 14:36:54.665940    1522 kubeadm.go:322] 
	I0914 14:36:54.665992    1522 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0914 14:36:54.665996    1522 kubeadm.go:322] 
	I0914 14:36:54.666008    1522 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0914 14:36:54.666036    1522 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 14:36:54.666071    1522 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 14:36:54.666074    1522 kubeadm.go:322] 
	I0914 14:36:54.666099    1522 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0914 14:36:54.666101    1522 kubeadm.go:322] 
	I0914 14:36:54.666123    1522 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0914 14:36:54.666126    1522 kubeadm.go:322] 
	I0914 14:36:54.666148    1522 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0914 14:36:54.666182    1522 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 14:36:54.666217    1522 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 14:36:54.666220    1522 kubeadm.go:322] 
	I0914 14:36:54.666261    1522 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0914 14:36:54.666306    1522 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0914 14:36:54.666308    1522 kubeadm.go:322] 
	I0914 14:36:54.666396    1522 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token v43sey.bixdamecwwaf1quf \
	I0914 14:36:54.666457    1522 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:64a56fc8b6798e154cd69d815313de8f6cddb906dc6093ac20ba940ac7c1d871 \
	I0914 14:36:54.666472    1522 kubeadm.go:322] 	--control-plane 
	I0914 14:36:54.666475    1522 kubeadm.go:322] 
	I0914 14:36:54.666513    1522 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0914 14:36:54.666517    1522 kubeadm.go:322] 
	I0914 14:36:54.666553    1522 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token v43sey.bixdamecwwaf1quf \
	I0914 14:36:54.666621    1522 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:64a56fc8b6798e154cd69d815313de8f6cddb906dc6093ac20ba940ac7c1d871 
	I0914 14:36:54.666672    1522 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 14:36:54.666677    1522 cni.go:84] Creating CNI manager for ""
	I0914 14:36:54.666685    1522 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 14:36:54.674398    1522 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 14:36:54.677531    1522 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 14:36:54.681843    1522 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0914 14:36:54.686762    1522 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 14:36:54.686820    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:36:54.686837    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=d7492f2ae2d9b6e62b385ffcd97ebad62c645e82 minikube.k8s.io/name=addons-388000 minikube.k8s.io/updated_at=2023_09_14T14_36_54_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:36:54.745761    1522 ops.go:34] apiserver oom_adj: -16
	I0914 14:36:54.751811    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:36:54.783862    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:36:55.319135    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:36:55.819146    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:36:56.319044    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:36:56.817396    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:36:57.317676    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:36:57.819036    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:36:58.319007    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:36:58.819025    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:36:59.318963    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:36:59.819032    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:00.318959    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:00.819007    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:01.318925    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:01.819004    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:02.318900    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:02.818938    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:03.318896    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:03.818843    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:04.318914    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:04.818824    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:05.318789    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:05.818890    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:06.318784    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:06.818791    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:07.318787    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:07.357143    1522 kubeadm.go:1081] duration metric: took 12.670689708s to wait for elevateKubeSystemPrivileges.
	I0914 14:37:07.357158    1522 kubeadm.go:406] StartCluster complete in 19.557685291s
	I0914 14:37:07.357184    1522 settings.go:142] acquiring lock: {Name:mkcccc97e247e7e1b2e556ccc64336c05a92af53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 14:37:07.357360    1522 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17243-1006/kubeconfig
	I0914 14:37:07.357606    1522 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/kubeconfig: {Name:mkeec13fc5a79792669e9cedabfbe21efeb27d3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 14:37:07.357803    1522 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0914 14:37:07.357856    1522 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0914 14:37:07.357902    1522 addons.go:69] Setting volumesnapshots=true in profile "addons-388000"
	I0914 14:37:07.357909    1522 addons.go:231] Setting addon volumesnapshots=true in "addons-388000"
	I0914 14:37:07.357912    1522 addons.go:69] Setting ingress=true in profile "addons-388000"
	I0914 14:37:07.357919    1522 addons.go:231] Setting addon ingress=true in "addons-388000"
	I0914 14:37:07.357926    1522 host.go:66] Checking if "addons-388000" exists ...
	I0914 14:37:07.357934    1522 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-388000"
	I0914 14:37:07.357942    1522 host.go:66] Checking if "addons-388000" exists ...
	I0914 14:37:07.357951    1522 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-388000"
	I0914 14:37:07.357967    1522 host.go:66] Checking if "addons-388000" exists ...
	I0914 14:37:07.357975    1522 addons.go:69] Setting ingress-dns=true in profile "addons-388000"
	I0914 14:37:07.357985    1522 addons.go:69] Setting metrics-server=true in profile "addons-388000"
	I0914 14:37:07.358004    1522 addons.go:231] Setting addon ingress-dns=true in "addons-388000"
	I0914 14:37:07.358008    1522 addons.go:231] Setting addon metrics-server=true in "addons-388000"
	I0914 14:37:07.358046    1522 host.go:66] Checking if "addons-388000" exists ...
	I0914 14:37:07.358051    1522 host.go:66] Checking if "addons-388000" exists ...
	I0914 14:37:07.358066    1522 addons.go:69] Setting inspektor-gadget=true in profile "addons-388000"
	I0914 14:37:07.358074    1522 addons.go:231] Setting addon inspektor-gadget=true in "addons-388000"
	I0914 14:37:07.358086    1522 host.go:66] Checking if "addons-388000" exists ...
	I0914 14:37:07.358133    1522 config.go:182] Loaded profile config "addons-388000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 14:37:07.358210    1522 addons.go:69] Setting registry=true in profile "addons-388000"
	I0914 14:37:07.358222    1522 addons.go:231] Setting addon registry=true in "addons-388000"
	I0914 14:37:07.358259    1522 addons.go:69] Setting cloud-spanner=true in profile "addons-388000"
	I0914 14:37:07.358263    1522 addons.go:69] Setting default-storageclass=true in profile "addons-388000"
	I0914 14:37:07.358265    1522 addons.go:231] Setting addon cloud-spanner=true in "addons-388000"
	I0914 14:37:07.358266    1522 host.go:66] Checking if "addons-388000" exists ...
	I0914 14:37:07.358273    1522 addons.go:69] Setting storage-provisioner=true in profile "addons-388000"
	I0914 14:37:07.358276    1522 host.go:66] Checking if "addons-388000" exists ...
	I0914 14:37:07.358278    1522 addons.go:231] Setting addon storage-provisioner=true in "addons-388000"
	I0914 14:37:07.358289    1522 host.go:66] Checking if "addons-388000" exists ...
	W0914 14:37:07.358332    1522 host.go:54] host status for "addons-388000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/monitor: connect: connection refused
	W0914 14:37:07.358339    1522 addons.go:277] "addons-388000" is not running, setting inspektor-gadget=true and skipping enablement (err=<nil>)
	W0914 14:37:07.358450    1522 host.go:54] host status for "addons-388000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/monitor: connect: connection refused
	W0914 14:37:07.358453    1522 addons.go:277] "addons-388000" is not running, setting volumesnapshots=true and skipping enablement (err=<nil>)
	W0914 14:37:07.358483    1522 host.go:54] host status for "addons-388000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/monitor: connect: connection refused
	W0914 14:37:07.358489    1522 addons.go:277] "addons-388000" is not running, setting cloud-spanner=true and skipping enablement (err=<nil>)
	W0914 14:37:07.358257    1522 host.go:54] host status for "addons-388000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/monitor: connect: connection refused
	W0914 14:37:07.358494    1522 addons.go:277] "addons-388000" is not running, setting ingress=true and skipping enablement (err=<nil>)
	I0914 14:37:07.358496    1522 addons.go:467] Verifying addon ingress=true in "addons-388000"
	W0914 14:37:07.358500    1522 host.go:54] host status for "addons-388000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/monitor: connect: connection refused
	W0914 14:37:07.358504    1522 addons.go:277] "addons-388000" is not running, setting storage-provisioner=true and skipping enablement (err=<nil>)
	I0914 14:37:07.363429    1522 out.go:177] * Verifying ingress addon...
	I0914 14:37:07.358269    1522 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-388000"
	W0914 14:37:07.358528    1522 host.go:54] host status for "addons-388000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/monitor: connect: connection refused
	I0914 14:37:07.358271    1522 addons.go:69] Setting gcp-auth=true in profile "addons-388000"
	W0914 14:37:07.358722    1522 host.go:54] host status for "addons-388000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/monitor: connect: connection refused
	W0914 14:37:07.370487    1522 addons.go:277] "addons-388000" is not running, setting ingress-dns=true and skipping enablement (err=<nil>)
	I0914 14:37:07.370528    1522 mustload.go:65] Loading cluster: addons-388000
	W0914 14:37:07.370533    1522 addons.go:277] "addons-388000" is not running, setting registry=true and skipping enablement (err=<nil>)
	I0914 14:37:07.370877    1522 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0914 14:37:07.371899    1522 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-388000" context rescaled to 1 replica
	I0914 14:37:07.372685    1522 addons.go:231] Setting addon default-storageclass=true in "addons-388000"
	I0914 14:37:07.374445    1522 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0914 14:37:07.377503    1522 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I0914 14:37:07.377530    1522 addons.go:467] Verifying addon registry=true in "addons-388000"
	I0914 14:37:07.377544    1522 host.go:66] Checking if "addons-388000" exists ...
	I0914 14:37:07.377592    1522 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 14:37:07.377611    1522 config.go:182] Loaded profile config "addons-388000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 14:37:07.379668    1522 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0914 14:37:07.387418    1522 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0914 14:37:07.384502    1522 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 14:37:07.385215    1522 host.go:66] Checking if "addons-388000" exists ...
	I0914 14:37:07.385566    1522 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 14:37:07.399476    1522 out.go:177] * Verifying Kubernetes components...
	I0914 14:37:07.399484    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 14:37:07.405519    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 14:37:07.405539    1522 out.go:177] * Verifying registry addon...
	I0914 14:37:07.409300    1522 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0914 14:37:07.413413    1522 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0914 14:37:07.409310    1522 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/id_rsa Username:docker}
	I0914 14:37:07.409318    1522 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/id_rsa Username:docker}
	I0914 14:37:07.413772    1522 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0914 14:37:07.421473    1522 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0914 14:37:07.425266    1522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 14:37:07.434436    1522 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0914 14:37:07.437375    1522 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=registry
	I0914 14:37:07.438456    1522 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0914 14:37:07.450436    1522 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0914 14:37:07.460462    1522 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0914 14:37:07.463476    1522 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0914 14:37:07.463485    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0914 14:37:07.463494    1522 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/id_rsa Username:docker}
	I0914 14:37:07.497507    1522 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 14:37:07.497516    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0914 14:37:07.503780    1522 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 14:37:07.503787    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 14:37:07.509075    1522 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 14:37:07.509081    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 14:37:07.516870    1522 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 14:37:07.522898    1522 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 14:37:07.539508    1522 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0914 14:37:07.539521    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0914 14:37:07.591865    1522 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0914 14:37:07.591879    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0914 14:37:07.635732    1522 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0914 14:37:07.635742    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0914 14:37:07.644322    1522 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0914 14:37:07.644333    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0914 14:37:07.649557    1522 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0914 14:37:07.649568    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0914 14:37:07.681313    1522 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0914 14:37:07.681325    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0914 14:37:07.685931    1522 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0914 14:37:07.685936    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0914 14:37:07.690914    1522 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0914 14:37:07.690921    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0914 14:37:07.695920    1522 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0914 14:37:07.695926    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0914 14:37:07.700851    1522 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0914 14:37:07.700856    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0914 14:37:07.705677    1522 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0914 14:37:08.213892    1522 start.go:917] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I0914 14:37:08.214323    1522 node_ready.go:35] waiting up to 6m0s for node "addons-388000" to be "Ready" ...
	I0914 14:37:08.215929    1522 node_ready.go:49] node "addons-388000" has status "Ready":"True"
	I0914 14:37:08.215948    1522 node_ready.go:38] duration metric: took 1.599458ms waiting for node "addons-388000" to be "Ready" ...
	I0914 14:37:08.215953    1522 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 14:37:08.218780    1522 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-6php8" in "kube-system" namespace to be "Ready" ...
	I0914 14:37:08.378405    1522 addons.go:467] Verifying addon metrics-server=true in "addons-388000"
	I0914 14:37:08.878056    1522 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.172383083s)
	I0914 14:37:08.878074    1522 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-388000"
	I0914 14:37:08.882346    1522 out.go:177] * Verifying csi-hostpath-driver addon...
	I0914 14:37:08.892719    1522 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0914 14:37:08.895508    1522 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0914 14:37:08.895515    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:08.901644    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:09.404389    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:09.734233    1522 pod_ready.go:97] error getting pod "coredns-5dd5756b68-6php8" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-6php8" not found
	I0914 14:37:09.734244    1522 pod_ready.go:81] duration metric: took 1.515495542s waiting for pod "coredns-5dd5756b68-6php8" in "kube-system" namespace to be "Ready" ...
	E0914 14:37:09.734250    1522 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-6php8" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-6php8" not found
	I0914 14:37:09.734253    1522 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-psn28" in "kube-system" namespace to be "Ready" ...
	I0914 14:37:09.736576    1522 pod_ready.go:92] pod "coredns-5dd5756b68-psn28" in "kube-system" namespace has status "Ready":"True"
	I0914 14:37:09.736583    1522 pod_ready.go:81] duration metric: took 2.327542ms waiting for pod "coredns-5dd5756b68-psn28" in "kube-system" namespace to be "Ready" ...
	I0914 14:37:09.736588    1522 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-388000" in "kube-system" namespace to be "Ready" ...
	I0914 14:37:09.739033    1522 pod_ready.go:92] pod "etcd-addons-388000" in "kube-system" namespace has status "Ready":"True"
	I0914 14:37:09.739038    1522 pod_ready.go:81] duration metric: took 2.447792ms waiting for pod "etcd-addons-388000" in "kube-system" namespace to be "Ready" ...
	I0914 14:37:09.739041    1522 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-388000" in "kube-system" namespace to be "Ready" ...
	I0914 14:37:09.741595    1522 pod_ready.go:92] pod "kube-apiserver-addons-388000" in "kube-system" namespace has status "Ready":"True"
	I0914 14:37:09.741601    1522 pod_ready.go:81] duration metric: took 2.556083ms waiting for pod "kube-apiserver-addons-388000" in "kube-system" namespace to be "Ready" ...
	I0914 14:37:09.741605    1522 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-388000" in "kube-system" namespace to be "Ready" ...
	I0914 14:37:09.904411    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:10.016583    1522 pod_ready.go:92] pod "kube-controller-manager-addons-388000" in "kube-system" namespace has status "Ready":"True"
	I0914 14:37:10.016591    1522 pod_ready.go:81] duration metric: took 274.98975ms waiting for pod "kube-controller-manager-addons-388000" in "kube-system" namespace to be "Ready" ...
	I0914 14:37:10.016595    1522 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8pbsf" in "kube-system" namespace to be "Ready" ...
	I0914 14:37:10.404994    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:10.417030    1522 pod_ready.go:92] pod "kube-proxy-8pbsf" in "kube-system" namespace has status "Ready":"True"
	I0914 14:37:10.417036    1522 pod_ready.go:81] duration metric: took 400.447833ms waiting for pod "kube-proxy-8pbsf" in "kube-system" namespace to be "Ready" ...
	I0914 14:37:10.417041    1522 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-388000" in "kube-system" namespace to be "Ready" ...
	I0914 14:37:10.816814    1522 pod_ready.go:92] pod "kube-scheduler-addons-388000" in "kube-system" namespace has status "Ready":"True"
	I0914 14:37:10.816823    1522 pod_ready.go:81] duration metric: took 399.789417ms waiting for pod "kube-scheduler-addons-388000" in "kube-system" namespace to be "Ready" ...
	I0914 14:37:10.816827    1522 pod_ready.go:38] duration metric: took 2.600935083s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 14:37:10.816835    1522 api_server.go:52] waiting for apiserver process to appear ...
	I0914 14:37:10.816886    1522 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 14:37:10.821727    1522 api_server.go:72] duration metric: took 3.437324417s to wait for apiserver process to appear ...
	I0914 14:37:10.821733    1522 api_server.go:88] waiting for apiserver healthz status ...
	I0914 14:37:10.821738    1522 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I0914 14:37:10.825342    1522 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
	I0914 14:37:10.826107    1522 api_server.go:141] control plane version: v1.28.1
	I0914 14:37:10.826114    1522 api_server.go:131] duration metric: took 4.378333ms to wait for apiserver health ...
	I0914 14:37:10.826117    1522 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 14:37:10.904363    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:11.018876    1522 system_pods.go:59] 10 kube-system pods found
	I0914 14:37:11.018886    1522 system_pods.go:61] "coredns-5dd5756b68-psn28" [50c0e128-9a93-456c-83af-dfbcda64eaa4] Running
	I0914 14:37:11.018891    1522 system_pods.go:61] "csi-hostpath-attacher-0" [29be2dba-12b9-4442-8c83-8d24fd054a90] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0914 14:37:11.018894    1522 system_pods.go:61] "csi-hostpath-resizer-0" [11fcc7a2-d176-442f-9cd6-04668da8d423] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0914 14:37:11.018898    1522 system_pods.go:61] "csi-hostpathplugin-b5k2m" [aa03259b-6f1a-4537-95f6-47e8cf8fcc39] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0914 14:37:11.018909    1522 system_pods.go:61] "etcd-addons-388000" [b91a9e99-fb10-4340-977e-536225df8415] Running
	I0914 14:37:11.018914    1522 system_pods.go:61] "kube-apiserver-addons-388000" [43fed39d-32f3-4b45-b43c-d9918758a66c] Running
	I0914 14:37:11.018917    1522 system_pods.go:61] "kube-controller-manager-addons-388000" [31eb0c68-03ca-4907-921b-14ccef970edf] Running
	I0914 14:37:11.018920    1522 system_pods.go:61] "kube-proxy-8pbsf" [e9d3ab50-7594-4360-8226-d37e954aca6e] Running
	I0914 14:37:11.018923    1522 system_pods.go:61] "kube-scheduler-addons-388000" [d931a34d-1c14-4544-80cd-ce847a1f1af8] Running
	I0914 14:37:11.018927    1522 system_pods.go:61] "metrics-server-7c66d45ddc-dtj78" [7b539063-f45b-4a15-97e7-6713ea57e519] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 14:37:11.018932    1522 system_pods.go:74] duration metric: took 192.817125ms to wait for pod list to return data ...
	I0914 14:37:11.018935    1522 default_sa.go:34] waiting for default service account to be created ...
	I0914 14:37:11.216117    1522 default_sa.go:45] found service account: "default"
	I0914 14:37:11.216127    1522 default_sa.go:55] duration metric: took 197.1925ms for default service account to be created ...
	I0914 14:37:11.216130    1522 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 14:37:11.404125    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:11.419144    1522 system_pods.go:86] 10 kube-system pods found
	I0914 14:37:11.419151    1522 system_pods.go:89] "coredns-5dd5756b68-psn28" [50c0e128-9a93-456c-83af-dfbcda64eaa4] Running
	I0914 14:37:11.419155    1522 system_pods.go:89] "csi-hostpath-attacher-0" [29be2dba-12b9-4442-8c83-8d24fd054a90] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0914 14:37:11.419158    1522 system_pods.go:89] "csi-hostpath-resizer-0" [11fcc7a2-d176-442f-9cd6-04668da8d423] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0914 14:37:11.419163    1522 system_pods.go:89] "csi-hostpathplugin-b5k2m" [aa03259b-6f1a-4537-95f6-47e8cf8fcc39] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0914 14:37:11.419167    1522 system_pods.go:89] "etcd-addons-388000" [b91a9e99-fb10-4340-977e-536225df8415] Running
	I0914 14:37:11.419169    1522 system_pods.go:89] "kube-apiserver-addons-388000" [43fed39d-32f3-4b45-b43c-d9918758a66c] Running
	I0914 14:37:11.419176    1522 system_pods.go:89] "kube-controller-manager-addons-388000" [31eb0c68-03ca-4907-921b-14ccef970edf] Running
	I0914 14:37:11.419178    1522 system_pods.go:89] "kube-proxy-8pbsf" [e9d3ab50-7594-4360-8226-d37e954aca6e] Running
	I0914 14:37:11.419180    1522 system_pods.go:89] "kube-scheduler-addons-388000" [d931a34d-1c14-4544-80cd-ce847a1f1af8] Running
	I0914 14:37:11.419183    1522 system_pods.go:89] "metrics-server-7c66d45ddc-dtj78" [7b539063-f45b-4a15-97e7-6713ea57e519] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 14:37:11.419189    1522 system_pods.go:126] duration metric: took 203.059ms to wait for k8s-apps to be running ...
	I0914 14:37:11.419193    1522 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 14:37:11.419242    1522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 14:37:11.424702    1522 system_svc.go:56] duration metric: took 5.506625ms WaitForService to wait for kubelet.
	I0914 14:37:11.424708    1522 kubeadm.go:581] duration metric: took 4.040322208s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0914 14:37:11.424718    1522 node_conditions.go:102] verifying NodePressure condition ...
	I0914 14:37:11.616510    1522 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0914 14:37:11.616524    1522 node_conditions.go:123] node cpu capacity is 2
	I0914 14:37:11.616531    1522 node_conditions.go:105] duration metric: took 191.81375ms to run NodePressure ...
	I0914 14:37:11.616536    1522 start.go:228] waiting for startup goroutines ...
	I0914 14:37:11.904062    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:12.404356    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:12.904283    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:13.404719    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:13.905195    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:14.010940    1522 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0914 14:37:14.010958    1522 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/id_rsa Username:docker}
	I0914 14:37:14.050416    1522 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0914 14:37:14.056158    1522 addons.go:231] Setting addon gcp-auth=true in "addons-388000"
	I0914 14:37:14.056180    1522 host.go:66] Checking if "addons-388000" exists ...
	I0914 14:37:14.056914    1522 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0914 14:37:14.056921    1522 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/id_rsa Username:docker}
	I0914 14:37:14.098984    1522 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0914 14:37:14.102963    1522 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0914 14:37:14.106843    1522 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0914 14:37:14.106851    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0914 14:37:14.112250    1522 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0914 14:37:14.112259    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0914 14:37:14.117057    1522 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0914 14:37:14.117063    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0914 14:37:14.122524    1522 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0914 14:37:14.407542    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:14.453711    1522 addons.go:467] Verifying addon gcp-auth=true in "addons-388000"
	I0914 14:37:14.458827    1522 out.go:177] * Verifying gcp-auth addon...
	I0914 14:37:14.469206    1522 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0914 14:37:14.473873    1522 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0914 14:37:14.473883    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:14.477552    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:14.905449    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:14.981028    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:15.404241    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:15.481017    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:15.904406    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:15.981050    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:16.404161    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:16.481356    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:16.904348    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:16.980852    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:17.404432    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:17.480937    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:17.904061    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:17.980969    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:18.404491    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:18.481031    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:18.904020    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:18.981054    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:19.405323    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:19.480019    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:19.904276    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:19.980839    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:20.404204    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:20.481250    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:20.904037    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:20.981407    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:21.404239    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:21.481248    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:21.904261    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:21.981109    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:22.405094    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:22.481049    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:22.904407    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:22.981227    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:23.404066    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:23.480779    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:23.904000    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:23.980955    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:24.404182    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:24.480903    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:24.904034    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:24.980896    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:25.403993    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:25.480949    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:25.903717    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:25.981591    1522 kapi.go:107] duration metric: took 11.512675166s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0914 14:37:25.985811    1522 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-388000 cluster.
	I0914 14:37:25.990747    1522 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0914 14:37:25.993661    1522 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
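	[editor's note] The three gcp-auth messages above mention the `gcp-auth-skip-secret` label. As a minimal sketch only (not part of this test log; the pod name and image are hypothetical, and the "true" value is illustrative since the message only names the label key), a pod carrying that label could be created like this:
	
	  cat <<'EOF' | kubectl apply -f -
	  apiVersion: v1
	  kind: Pod
	  metadata:
	    name: no-gcp-creds-demo            # hypothetical name, for illustration only
	    labels:
	      gcp-auth-skip-secret: "true"     # label key named in the gcp-auth addon output above
	  spec:
	    containers:
	    - name: sleep
	      image: busybox
	      command: ["sleep", "3600"]
	  EOF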
	I0914 14:37:26.404089    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:26.904132    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:27.405664    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:27.903941    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:28.403884    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:28.903901    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:29.404487    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:29.903852    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:30.404685    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:30.903890    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:31.403753    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:31.903926    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:32.404318    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:32.903835    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:33.403834    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:33.903687    1522 kapi.go:107] duration metric: took 25.011601375s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0914 14:43:07.370409    1522 kapi.go:107] duration metric: took 6m0.008648916s to wait for app.kubernetes.io/name=ingress-nginx ...
	W0914 14:43:07.370479    1522 out.go:239] ! Enabling 'ingress' returned an error: running callbacks: [waiting for app.kubernetes.io/name=ingress-nginx pods: context deadline exceeded]
	I0914 14:43:07.418192    1522 kapi.go:107] duration metric: took 6m0.013534334s to wait for kubernetes.io/minikube-addons=registry ...
	W0914 14:43:07.418227    1522 out.go:239] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
	I0914 14:43:07.425587    1522 out.go:177] * Enabled addons: inspektor-gadget, volumesnapshots, cloud-spanner, storage-provisioner, ingress-dns, default-storageclass, metrics-server, gcp-auth, csi-hostpath-driver
	I0914 14:43:07.433636    1522 addons.go:502] enable addons completed in 6m0.084906709s: enabled=[inspektor-gadget volumesnapshots cloud-spanner storage-provisioner ingress-dns default-storageclass metrics-server gcp-auth csi-hostpath-driver]
	I0914 14:43:07.433650    1522 start.go:233] waiting for cluster config update ...
	I0914 14:43:07.433664    1522 start.go:242] writing updated cluster config ...
	I0914 14:43:07.433996    1522 ssh_runner.go:195] Run: rm -f paused
	I0914 14:43:07.464084    1522 start.go:600] kubectl: 1.27.2, cluster: 1.28.1 (minor skew: 1)
	I0914 14:43:07.467672    1522 out.go:177] * Done! kubectl is now configured to use "addons-388000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Thu 2023-09-14 21:36:36 UTC, ends at Thu 2023-09-14 22:03:11 UTC. --
	Sep 14 21:55:14 addons-388000 dockerd[1156]: time="2023-09-14T21:55:14.667931603Z" level=info msg="ignoring event" container=40cdfd9e591d6d222ceba6780cb17bfc909ac2ecbb55e2501c67b4a00e7499a9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 14 21:55:14 addons-388000 dockerd[1162]: time="2023-09-14T21:55:14.668071849Z" level=info msg="shim disconnected" id=40cdfd9e591d6d222ceba6780cb17bfc909ac2ecbb55e2501c67b4a00e7499a9 namespace=moby
	Sep 14 21:55:14 addons-388000 dockerd[1162]: time="2023-09-14T21:55:14.668117056Z" level=warning msg="cleaning up after shim disconnected" id=40cdfd9e591d6d222ceba6780cb17bfc909ac2ecbb55e2501c67b4a00e7499a9 namespace=moby
	Sep 14 21:55:14 addons-388000 dockerd[1162]: time="2023-09-14T21:55:14.668121222Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 14 21:56:47 addons-388000 dockerd[1162]: time="2023-09-14T21:56:47.096101665Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 14 21:56:47 addons-388000 dockerd[1162]: time="2023-09-14T21:56:47.096128415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 21:56:47 addons-388000 dockerd[1162]: time="2023-09-14T21:56:47.096134790Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 14 21:56:47 addons-388000 dockerd[1162]: time="2023-09-14T21:56:47.096138914Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 21:56:47 addons-388000 cri-dockerd[1056]: time="2023-09-14T21:56:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2e3fb7677fbbb72b513ff9c738d4b4347a2fe388870c97fd2b8449bb01ea2929/resolv.conf as [nameserver 10.96.0.10 search headlamp.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 14 21:56:47 addons-388000 dockerd[1156]: time="2023-09-14T21:56:47.440443205Z" level=warning msg="reference for unknown type: " digest="sha256:498ea22dc5acadaa4015e7a50335d21fdce45d9e8f1f8adf29c2777da4182f98" remote="ghcr.io/headlamp-k8s/headlamp@sha256:498ea22dc5acadaa4015e7a50335d21fdce45d9e8f1f8adf29c2777da4182f98"
	Sep 14 21:56:52 addons-388000 cri-dockerd[1056]: time="2023-09-14T21:56:52Z" level=info msg="Stop pulling image ghcr.io/headlamp-k8s/headlamp:v0.19.0@sha256:498ea22dc5acadaa4015e7a50335d21fdce45d9e8f1f8adf29c2777da4182f98: Status: Downloaded newer image for ghcr.io/headlamp-k8s/headlamp@sha256:498ea22dc5acadaa4015e7a50335d21fdce45d9e8f1f8adf29c2777da4182f98"
	Sep 14 21:56:52 addons-388000 dockerd[1162]: time="2023-09-14T21:56:52.290106170Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 14 21:56:52 addons-388000 dockerd[1162]: time="2023-09-14T21:56:52.290133544Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 21:56:52 addons-388000 dockerd[1162]: time="2023-09-14T21:56:52.290142794Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 14 21:56:52 addons-388000 dockerd[1162]: time="2023-09-14T21:56:52.290162043Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 21:57:00 addons-388000 dockerd[1162]: time="2023-09-14T21:57:00.415101798Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 14 21:57:00 addons-388000 dockerd[1162]: time="2023-09-14T21:57:00.415132297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 21:57:00 addons-388000 dockerd[1162]: time="2023-09-14T21:57:00.415325584Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 14 21:57:00 addons-388000 dockerd[1162]: time="2023-09-14T21:57:00.415338375Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 21:57:00 addons-388000 cri-dockerd[1056]: time="2023-09-14T21:57:00Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f19d008bf96077bec263ce950e5d45b2ca84f877b5b6a5cc94a2c2393f816d18/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 14 21:57:05 addons-388000 cri-dockerd[1056]: time="2023-09-14T21:57:05Z" level=info msg="Stop pulling image docker.io/nginx:latest: Status: Downloaded newer image for nginx:latest"
	Sep 14 21:57:05 addons-388000 dockerd[1162]: time="2023-09-14T21:57:05.390694560Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 14 21:57:05 addons-388000 dockerd[1162]: time="2023-09-14T21:57:05.390723310Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 21:57:05 addons-388000 dockerd[1162]: time="2023-09-14T21:57:05.390927346Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 14 21:57:05 addons-388000 dockerd[1162]: time="2023-09-14T21:57:05.390933554Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID
	0e57ed777e3e7       nginx@sha256:6926dd802f40e5e7257fded83e0d8030039642e4e10c4a98a6478e9c6fe06153                                                                6 minutes ago       Running             task-pv-container                        0                   f19d008bf9607
	ab77278ca5874       ghcr.io/headlamp-k8s/headlamp@sha256:498ea22dc5acadaa4015e7a50335d21fdce45d9e8f1f8adf29c2777da4182f98                                        6 minutes ago       Running             headlamp                                 0                   2e3fb7677fbbb
	c6e7158ec87e6       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          25 minutes ago      Running             csi-snapshotter                          0                   23a9864c5e7a2
	8fbd96f503108       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          25 minutes ago      Running             csi-provisioner                          0                   23a9864c5e7a2
	5a28f3666ec4d       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            25 minutes ago      Running             liveness-probe                           0                   23a9864c5e7a2
	4a515f3dbd90e       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           25 minutes ago      Running             hostpath                                 0                   23a9864c5e7a2
	726bdbe627b06       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf                                 25 minutes ago      Running             gcp-auth                                 0                   039c490b8ce95
	c5e816aa3fb60       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                25 minutes ago      Running             node-driver-registrar                    0                   23a9864c5e7a2
	0574ef72c784a       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              25 minutes ago      Running             csi-resizer                              0                   928188ebbbe5c
	0af4f9c858980       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   25 minutes ago      Running             csi-external-health-monitor-controller   0                   23a9864c5e7a2
	9a3fe3bf72dd7       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             25 minutes ago      Running             csi-attacher                             0                   aec96cfd028be
	1f519e69776da       97e04611ad434                                                                                                                                26 minutes ago      Running             coredns                                  0                   6b82b02e01da4
	c36ca5fc76214       812f5241df7fd                                                                                                                                26 minutes ago      Running             kube-proxy                               0                   24118a5be8efa
	af45960dc2d7c       b4a5a57e99492                                                                                                                                26 minutes ago      Running             kube-scheduler                           0                   6dde63050aa99
	39f78945ed576       b29fb62480892                                                                                                                                26 minutes ago      Running             kube-apiserver                           0                   a02ab403a50ec
	f2717f532e595       8b6e1980b7584                                                                                                                                26 minutes ago      Running             kube-controller-manager                  0                   834af4f99b3bc
	5a63d0e8296f4       9cdd6470f48c8                                                                                                                                26 minutes ago      Running             etcd                                     0                   b2289ff5c077b
	
	* 
	* ==> coredns [1f519e69776d] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               addons-388000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-388000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7492f2ae2d9b6e62b385ffcd97ebad62c645e82
	                    minikube.k8s.io/name=addons-388000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_14T14_36_54_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-388000
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-388000"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Sep 2023 21:36:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-388000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Sep 2023 22:03:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Sep 2023 22:02:37 +0000   Thu, 14 Sep 2023 21:36:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Sep 2023 22:02:37 +0000   Thu, 14 Sep 2023 21:36:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Sep 2023 22:02:37 +0000   Thu, 14 Sep 2023 21:36:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Sep 2023 22:02:37 +0000   Thu, 14 Sep 2023 21:36:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-388000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 ca8cf67d46214b1fbc59c14cf3d2d66f
	  System UUID:                ca8cf67d46214b1fbc59c14cf3d2d66f
	  Boot ID:                    386c1075-3226-461a-ab43-e16ad465a6c4
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     task-pv-pod                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m11s
	  gcp-auth                    gcp-auth-d4c87556c-pjjjl                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	  headlamp                    headlamp-699c48fb74-9lhdj                0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m25s
	  kube-system                 coredns-5dd5756b68-psn28                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     26m
	  kube-system                 csi-hostpath-attacher-0                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 csi-hostpath-resizer-0                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 csi-hostpathplugin-b5k2m                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 etcd-addons-388000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         26m
	  kube-system                 kube-apiserver-addons-388000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-controller-manager-addons-388000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-proxy-8pbsf                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-scheduler-addons-388000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         26m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 26m   kube-proxy       
	  Normal  Starting                 26m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  26m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  26m   kubelet          Node addons-388000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    26m   kubelet          Node addons-388000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     26m   kubelet          Node addons-388000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                26m   kubelet          Node addons-388000 status is now: NodeReady
	  Normal  RegisteredNode           26m   node-controller  Node addons-388000 event: Registered Node addons-388000 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000000] KASLR disabled due to lack of seed
	[  +0.645091] EINJ: EINJ table not found.
	[  +0.506039] systemd-fstab-generator[117]: Ignoring "noauto" for root device
	[  +0.043466] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000824] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +4.211816] systemd-fstab-generator[481]: Ignoring "noauto" for root device
	[  +0.087452] systemd-fstab-generator[492]: Ignoring "noauto" for root device
	[  +0.529791] systemd-fstab-generator[754]: Ignoring "noauto" for root device
	[  +0.178247] systemd-fstab-generator[791]: Ignoring "noauto" for root device
	[  +0.078699] systemd-fstab-generator[802]: Ignoring "noauto" for root device
	[  +0.082696] systemd-fstab-generator[815]: Ignoring "noauto" for root device
	[  +1.243164] systemd-fstab-generator[973]: Ignoring "noauto" for root device
	[  +0.079535] systemd-fstab-generator[984]: Ignoring "noauto" for root device
	[  +0.081510] systemd-fstab-generator[995]: Ignoring "noauto" for root device
	[  +0.082103] systemd-fstab-generator[1006]: Ignoring "noauto" for root device
	[  +0.084560] systemd-fstab-generator[1049]: Ignoring "noauto" for root device
	[  +2.579762] systemd-fstab-generator[1149]: Ignoring "noauto" for root device
	[  +2.146558] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.177617] systemd-fstab-generator[1466]: Ignoring "noauto" for root device
	[  +5.135787] systemd-fstab-generator[2333]: Ignoring "noauto" for root device
	[Sep14 21:37] kauditd_printk_skb: 41 callbacks suppressed
	[  +1.224924] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[  +5.069347] kauditd_printk_skb: 47 callbacks suppressed
	[  +8.062309] kauditd_printk_skb: 6 callbacks suppressed
	[  +6.104989] kauditd_printk_skb: 5 callbacks suppressed
	
	* 
	* ==> etcd [5a63d0e8296f] <==
	* {"level":"info","ts":"2023-09-14T21:36:51.511114Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became leader at term 2"}
	{"level":"info","ts":"2023-09-14T21:36:51.511137Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c46d288d2fcb0590 elected leader c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2023-09-14T21:36:51.511827Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"c46d288d2fcb0590","local-member-attributes":"{Name:addons-388000 ClientURLs:[https://192.168.105.2:2379]}","request-path":"/0/members/c46d288d2fcb0590/attributes","cluster-id":"6e03e7863b4f9c54","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-14T21:36:51.511953Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-14T21:36:51.512015Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-14T21:36:51.512221Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-14T21:36:51.512251Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-14T21:36:51.512273Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-14T21:36:51.512322Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-14T21:36:51.512345Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-14T21:36:51.512368Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-14T21:36:51.512743Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.2:2379"}
	{"level":"info","ts":"2023-09-14T21:36:51.513582Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-14T21:46:51.09248Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":801}
	{"level":"info","ts":"2023-09-14T21:46:51.094243Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":801,"took":"1.342212ms","hash":1083412012}
	{"level":"info","ts":"2023-09-14T21:46:51.094259Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1083412012,"revision":801,"compact-revision":-1}
	{"level":"info","ts":"2023-09-14T21:51:51.097381Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":951}
	{"level":"info","ts":"2023-09-14T21:51:51.097919Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":951,"took":"269.454µs","hash":1387439011}
	{"level":"info","ts":"2023-09-14T21:51:51.097932Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1387439011,"revision":951,"compact-revision":801}
	{"level":"info","ts":"2023-09-14T21:56:51.100187Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1102}
	{"level":"info","ts":"2023-09-14T21:56:51.100605Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1102,"took":"247.994µs","hash":2959214090}
	{"level":"info","ts":"2023-09-14T21:56:51.10062Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2959214090,"revision":1102,"compact-revision":951}
	{"level":"info","ts":"2023-09-14T22:01:51.104012Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1309}
	{"level":"info","ts":"2023-09-14T22:01:51.104575Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1309,"took":"373.243µs","hash":3004076975}
	{"level":"info","ts":"2023-09-14T22:01:51.104589Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3004076975,"revision":1309,"compact-revision":1102}
	
	* 
	* ==> gcp-auth [726bdbe627b0] <==
	* 2023/09/14 21:37:25 GCP Auth Webhook started!
	2023/09/14 21:56:46 Ready to marshal response ...
	2023/09/14 21:56:46 Ready to write response ...
	2023/09/14 21:56:46 Ready to marshal response ...
	2023/09/14 21:56:46 Ready to write response ...
	2023/09/14 21:56:46 Ready to marshal response ...
	2023/09/14 21:56:46 Ready to write response ...
	2023/09/14 21:57:00 Ready to marshal response ...
	2023/09/14 21:57:00 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  22:03:11 up 26 min,  0 users,  load average: 0.10, 0.23, 0.21
	Linux addons-388000 5.10.57 #1 SMP PREEMPT Wed Sep 13 19:05:16 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [39f78945ed57] <==
	* W0914 21:53:00.844205       1 watcher.go:245] watch chan error: etcdserver: mvcc: required revision has been compacted
	I0914 21:53:51.695037       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0914 21:54:51.695833       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0914 21:55:13.478412       1 controller.go:159] removing "v1beta1.metrics.k8s.io" from AggregationController failed with: resource not found
	E0914 21:55:19.745518       1 handler_proxy.go:137] error resolving kube-system/metrics-server: service "metrics-server" not found
	W0914 21:55:19.745550       1 handler_proxy.go:93] no RequestInfo found in the context
	E0914 21:55:19.745571       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0914 21:55:19.745579       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0914 21:56:19.746684       1 handler_proxy.go:137] error resolving kube-system/metrics-server: service "metrics-server" not found
	W0914 21:56:19.746702       1 handler_proxy.go:93] no RequestInfo found in the context
	E0914 21:56:19.746726       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0914 21:56:19.746731       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0914 21:56:46.705370       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.97.191.186"}
	E0914 21:58:19.747771       1 handler_proxy.go:137] error resolving kube-system/metrics-server: service "metrics-server" not found
	W0914 21:58:19.747825       1 handler_proxy.go:93] no RequestInfo found in the context
	E0914 21:58:19.747852       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0914 21:58:19.747863       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0914 22:02:19.748895       1 handler_proxy.go:137] error resolving kube-system/metrics-server: service "metrics-server" not found
	W0914 22:02:19.748911       1 handler_proxy.go:93] no RequestInfo found in the context
	E0914 22:02:19.748937       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0914 22:02:19.748941       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [f2717f532e59] <==
	* I0914 21:37:25.770929       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-d4c87556c" duration="2.310416ms"
	I0914 21:37:25.771746       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-d4c87556c" duration="11.791µs"
	I0914 21:37:53.005858       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0914 21:37:53.014644       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0914 21:37:54.003849       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0914 21:37:54.024141       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0914 21:55:13.485728       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-7c66d45ddc" duration="2.166µs"
	I0914 21:56:46.714052       1 event.go:307] "Event occurred" object="headlamp/headlamp" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set headlamp-699c48fb74 to 1"
	I0914 21:56:46.722225       1 event.go:307] "Event occurred" object="headlamp/headlamp-699c48fb74" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"headlamp-699c48fb74-\" is forbidden: error looking up service account headlamp/headlamp: serviceaccount \"headlamp\" not found"
	I0914 21:56:46.724298       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-699c48fb74" duration="6.644453ms"
	E0914 21:56:46.724312       1 replica_set.go:557] sync "headlamp/headlamp-699c48fb74" failed with pods "headlamp-699c48fb74-" is forbidden: error looking up service account headlamp/headlamp: serviceaccount "headlamp" not found
	I0914 21:56:46.728303       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-699c48fb74" duration="3.955148ms"
	E0914 21:56:46.728333       1 replica_set.go:557] sync "headlamp/headlamp-699c48fb74" failed with pods "headlamp-699c48fb74-" is forbidden: error looking up service account headlamp/headlamp: serviceaccount "headlamp" not found
	I0914 21:56:46.728354       1 event.go:307] "Event occurred" object="headlamp/headlamp-699c48fb74" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"headlamp-699c48fb74-\" is forbidden: error looking up service account headlamp/headlamp: serviceaccount \"headlamp\" not found"
	I0914 21:56:46.735153       1 event.go:307] "Event occurred" object="headlamp/headlamp-699c48fb74" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: headlamp-699c48fb74-9lhdj"
	I0914 21:56:46.737941       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-699c48fb74" duration="8.182122ms"
	I0914 21:56:46.759020       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-699c48fb74" duration="21.052915ms"
	I0914 21:56:46.759228       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-699c48fb74" duration="15.25µs"
	I0914 21:56:46.768831       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-699c48fb74" duration="28.957µs"
	I0914 21:56:52.626631       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-699c48fb74" duration="53.29µs"
	I0914 21:56:52.641632       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-699c48fb74" duration="3.462287ms"
	I0914 21:56:52.641923       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-699c48fb74" duration="12.541µs"
	I0914 21:56:57.882660       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0914 21:56:57.882836       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0914 21:56:59.545001       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	
	* 
	* ==> kube-proxy [c36ca5fc7621] <==
	* I0914 21:37:08.522854       1 server_others.go:69] "Using iptables proxy"
	I0914 21:37:08.529066       1 node.go:141] Successfully retrieved node IP: 192.168.105.2
	I0914 21:37:08.587870       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0914 21:37:08.587883       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0914 21:37:08.588459       1 server_others.go:152] "Using iptables Proxier"
	I0914 21:37:08.588486       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0914 21:37:08.588572       1 server.go:846] "Version info" version="v1.28.1"
	I0914 21:37:08.588578       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 21:37:08.589296       1 config.go:188] "Starting service config controller"
	I0914 21:37:08.589305       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0914 21:37:08.589315       1 config.go:97] "Starting endpoint slice config controller"
	I0914 21:37:08.589317       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0914 21:37:08.589522       1 config.go:315] "Starting node config controller"
	I0914 21:37:08.589524       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0914 21:37:08.690794       1 shared_informer.go:318] Caches are synced for node config
	I0914 21:37:08.690821       1 shared_informer.go:318] Caches are synced for service config
	I0914 21:37:08.690838       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [af45960dc2d7] <==
	* E0914 21:36:52.199210       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0914 21:36:52.199206       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0914 21:36:52.199236       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0914 21:36:52.199258       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0914 21:36:52.199265       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0914 21:36:52.199278       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0914 21:36:52.199281       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0914 21:36:52.199189       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0914 21:36:52.199288       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0914 21:36:52.199260       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0914 21:36:52.199247       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0914 21:36:52.199323       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0914 21:36:53.095318       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0914 21:36:53.095337       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0914 21:36:53.142146       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0914 21:36:53.142164       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0914 21:36:53.158912       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0914 21:36:53.159021       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0914 21:36:53.162940       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0914 21:36:53.163031       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0914 21:36:53.206403       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0914 21:36:53.206481       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0914 21:36:53.209535       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0914 21:36:53.209549       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0914 21:36:53.797539       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-09-14 21:36:36 UTC, ends at Thu 2023-09-14 22:03:11 UTC. --
	Sep 14 21:57:54 addons-388000 kubelet[2339]: E0914 21:57:54.524836    2339 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 14 21:57:54 addons-388000 kubelet[2339]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 14 21:57:54 addons-388000 kubelet[2339]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 21:57:54 addons-388000 kubelet[2339]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 14 21:58:54 addons-388000 kubelet[2339]: E0914 21:58:54.524887    2339 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 14 21:58:54 addons-388000 kubelet[2339]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 14 21:58:54 addons-388000 kubelet[2339]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 21:58:54 addons-388000 kubelet[2339]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 14 21:59:54 addons-388000 kubelet[2339]: E0914 21:59:54.525033    2339 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 14 21:59:54 addons-388000 kubelet[2339]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 14 21:59:54 addons-388000 kubelet[2339]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 21:59:54 addons-388000 kubelet[2339]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 14 22:00:54 addons-388000 kubelet[2339]: E0914 22:00:54.524416    2339 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 14 22:00:54 addons-388000 kubelet[2339]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 14 22:00:54 addons-388000 kubelet[2339]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 22:00:54 addons-388000 kubelet[2339]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 14 22:01:54 addons-388000 kubelet[2339]: E0914 22:01:54.525398    2339 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 14 22:01:54 addons-388000 kubelet[2339]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 14 22:01:54 addons-388000 kubelet[2339]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 22:01:54 addons-388000 kubelet[2339]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 14 22:01:54 addons-388000 kubelet[2339]: W0914 22:01:54.547125    2339 machine.go:65] Cannot read vendor id correctly, set empty.
	Sep 14 22:02:54 addons-388000 kubelet[2339]: E0914 22:02:54.524680    2339 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 14 22:02:54 addons-388000 kubelet[2339]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 14 22:02:54 addons-388000 kubelet[2339]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 22:02:54 addons-388000 kubelet[2339]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-388000 -n addons-388000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-388000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/CSI FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/CSI (374.09s)
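Note: the controller-manager log above shows the default/hpvc claim still stuck in ExternalProvisioning, waiting on the hostpath.csi.k8s.io provisioner, when the test timed out. A minimal triage sketch for that state, assuming the addons-388000 context from this run is still reachable (the grep filter and command sequence are illustrative, not something the harness ran):

	# Look for the CSI hostpath driver pods and whether they ever became Ready
	kubectl --context addons-388000 get pods -A -o wide | grep -i csi
	# Inspect the claim's events to confirm it is still waiting on external provisioning
	kubectl --context addons-388000 -n default describe pvc hpvc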

TestAddons/parallel/CloudSpanner (818.09s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:831: failed waiting for cloud-spanner-emulator deployment to stabilize: timed out waiting for the condition
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
addons_test.go:833: ***** TestAddons/parallel/CloudSpanner: pod "app=cloud-spanner-emulator" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:833: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-388000 -n addons-388000
addons_test.go:833: TestAddons/parallel/CloudSpanner: showing logs for failed pods as of 2023-09-14 14:55:07.566964 -0700 PDT m=+1174.754156626
addons_test.go:834: failed waiting for app=cloud-spanner-emulator pod: app=cloud-spanner-emulator within 6m0s: context deadline exceeded
addons_test.go:836: (dbg) Run:  out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-388000
addons_test.go:836: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-388000: exit status 10 (1m37.253032084s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE: disable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl delete --force --ignore-not-found -f /etc/kubernetes/addons/deployment.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: the path "/etc/kubernetes/addons/deployment.yaml" does not exist
	]
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:837: failed to disable cloud-spanner addon: args "out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-388000" : exit status 10
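Note: the disable step fails inside the VM because kubectl is pointed at /etc/kubernetes/addons/deployment.yaml, which was never written to the node. A quick way to confirm that from the host, using the same profile and path shown in the stderr above (the ssh listing is standard minikube usage, not part of the test flow):

	# List whatever addon manifests actually made it onto the node
	out/minikube-darwin-arm64 -p addons-388000 ssh -- ls -l /etc/kubernetes/addons/
	# Retry the disable once the missing manifest is accounted for
	out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-388000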
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-388000 -n addons-388000
helpers_test.go:244: <<< TestAddons/parallel/CloudSpanner FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/CloudSpanner]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-388000 logs -n 25
helpers_test.go:252: TestAddons/parallel/CloudSpanner logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-917000 | jenkins | v1.31.2 | 14 Sep 23 14:35 PDT |                     |
	|         | -p download-only-917000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-917000 | jenkins | v1.31.2 | 14 Sep 23 14:35 PDT |                     |
	|         | -p download-only-917000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.31.2 | 14 Sep 23 14:36 PDT | 14 Sep 23 14:36 PDT |
	| delete  | -p download-only-917000        | download-only-917000 | jenkins | v1.31.2 | 14 Sep 23 14:36 PDT | 14 Sep 23 14:36 PDT |
	| delete  | -p download-only-917000        | download-only-917000 | jenkins | v1.31.2 | 14 Sep 23 14:36 PDT | 14 Sep 23 14:36 PDT |
	| start   | --download-only -p             | binary-mirror-231000 | jenkins | v1.31.2 | 14 Sep 23 14:36 PDT |                     |
	|         | binary-mirror-231000           |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --binary-mirror                |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49379         |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-231000        | binary-mirror-231000 | jenkins | v1.31.2 | 14 Sep 23 14:36 PDT | 14 Sep 23 14:36 PDT |
	| start   | -p addons-388000               | addons-388000        | jenkins | v1.31.2 | 14 Sep 23 14:36 PDT | 14 Sep 23 14:43 PDT |
	|         | --wait=true --memory=4000      |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --addons=registry              |                      |         |         |                     |                     |
	|         | --addons=metrics-server        |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                      |         |         |                     |                     |
	|         | --addons=gcp-auth              |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|         | --addons=ingress               |                      |         |         |                     |                     |
	|         | --addons=ingress-dns           |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-388000        | jenkins | v1.31.2 | 14 Sep 23 14:55 PDT |                     |
	|         | addons-388000                  |                      |         |         |                     |                     |
	| addons  | addons-388000 addons           | addons-388000        | jenkins | v1.31.2 | 14 Sep 23 14:55 PDT | 14 Sep 23 14:55 PDT |
	|         | disable metrics-server         |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/14 14:36:23
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.21.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 14:36:23.572515    1522 out.go:296] Setting OutFile to fd 1 ...
	I0914 14:36:23.572636    1522 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 14:36:23.572639    1522 out.go:309] Setting ErrFile to fd 2...
	I0914 14:36:23.572642    1522 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 14:36:23.572752    1522 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17243-1006/.minikube/bin
	I0914 14:36:23.573756    1522 out.go:303] Setting JSON to false
	I0914 14:36:23.588610    1522 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":357,"bootTime":1694727026,"procs":391,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 14:36:23.588683    1522 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0914 14:36:23.593630    1522 out.go:177] * [addons-388000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0914 14:36:23.600459    1522 out.go:177]   - MINIKUBE_LOCATION=17243
	I0914 14:36:23.600497    1522 notify.go:220] Checking for updates...
	I0914 14:36:23.603591    1522 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	I0914 14:36:23.606425    1522 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 14:36:23.609496    1522 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 14:36:23.612541    1522 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	I0914 14:36:23.615423    1522 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 14:36:23.618648    1522 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 14:36:23.622479    1522 out.go:177] * Using the qemu2 driver based on user configuration
	I0914 14:36:23.629482    1522 start.go:298] selected driver: qemu2
	I0914 14:36:23.629487    1522 start.go:902] validating driver "qemu2" against <nil>
	I0914 14:36:23.629493    1522 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 14:36:23.631382    1522 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0914 14:36:23.634542    1522 out.go:177] * Automatically selected the socket_vmnet network
	I0914 14:36:23.637548    1522 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 14:36:23.637570    1522 cni.go:84] Creating CNI manager for ""
	I0914 14:36:23.637578    1522 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 14:36:23.637583    1522 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0914 14:36:23.637590    1522 start_flags.go:321] config:
	{Name:addons-388000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:addons-388000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID
:0 AutoPauseInterval:1m0s}
	I0914 14:36:23.641729    1522 iso.go:125] acquiring lock: {Name:mkd5b68b252c18146a825ff5365883948b3c6983 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 14:36:23.649492    1522 out.go:177] * Starting control plane node addons-388000 in cluster addons-388000
	I0914 14:36:23.653459    1522 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0914 14:36:23.653478    1522 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0914 14:36:23.653492    1522 cache.go:57] Caching tarball of preloaded images
	I0914 14:36:23.653557    1522 preload.go:174] Found /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 14:36:23.653564    1522 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0914 14:36:23.653811    1522 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/config.json ...
	I0914 14:36:23.653825    1522 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/config.json: {Name:mk9010c5dfb0ad4a966bb29118112217ba3b6cf7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 14:36:23.654041    1522 start.go:365] acquiring machines lock for addons-388000: {Name:mk831f2bd5a9a0c932c8a59972319d68edb53d07 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 14:36:23.654147    1522 start.go:369] acquired machines lock for "addons-388000" in 99.875µs
	I0914 14:36:23.654159    1522 start.go:93] Provisioning new machine with config: &{Name:addons-388000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.28.1 ClusterName:addons-388000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262
144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 14:36:23.654194    1522 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 14:36:23.662516    1522 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0914 14:36:23.982709    1522 start.go:159] libmachine.API.Create for "addons-388000" (driver="qemu2")
	I0914 14:36:23.982756    1522 client.go:168] LocalClient.Create starting
	I0914 14:36:23.982899    1522 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem
	I0914 14:36:24.329911    1522 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/cert.pem
	I0914 14:36:24.425142    1522 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17243-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1694625400-17243-arm64.iso...
	I0914 14:36:24.784281    1522 main.go:141] libmachine: Creating SSH key...
	I0914 14:36:25.013863    1522 main.go:141] libmachine: Creating Disk image...
	I0914 14:36:25.013874    1522 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 14:36:25.014143    1522 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/disk.qcow2.raw /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/disk.qcow2
	I0914 14:36:25.048599    1522 main.go:141] libmachine: STDOUT: 
	I0914 14:36:25.048634    1522 main.go:141] libmachine: STDERR: 
	I0914 14:36:25.048701    1522 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/disk.qcow2 +20000M
	I0914 14:36:25.056105    1522 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 14:36:25.056122    1522 main.go:141] libmachine: STDERR: 
	I0914 14:36:25.056141    1522 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/disk.qcow2
	I0914 14:36:25.056150    1522 main.go:141] libmachine: Starting QEMU VM...
	I0914 14:36:25.056194    1522 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:ab:b1:c2:6f:25 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/disk.qcow2
	I0914 14:36:25.122275    1522 main.go:141] libmachine: STDOUT: 
	I0914 14:36:25.122322    1522 main.go:141] libmachine: STDERR: 
	I0914 14:36:25.122327    1522 main.go:141] libmachine: Attempt 0
	I0914 14:36:25.122346    1522 main.go:141] libmachine: Searching for fa:ab:b1:c2:6f:25 in /var/db/dhcpd_leases ...
	I0914 14:36:27.123500    1522 main.go:141] libmachine: Attempt 1
	I0914 14:36:27.123581    1522 main.go:141] libmachine: Searching for fa:ab:b1:c2:6f:25 in /var/db/dhcpd_leases ...
	I0914 14:36:29.124764    1522 main.go:141] libmachine: Attempt 2
	I0914 14:36:29.124788    1522 main.go:141] libmachine: Searching for fa:ab:b1:c2:6f:25 in /var/db/dhcpd_leases ...
	I0914 14:36:31.125900    1522 main.go:141] libmachine: Attempt 3
	I0914 14:36:31.125919    1522 main.go:141] libmachine: Searching for fa:ab:b1:c2:6f:25 in /var/db/dhcpd_leases ...
	I0914 14:36:33.126934    1522 main.go:141] libmachine: Attempt 4
	I0914 14:36:33.126945    1522 main.go:141] libmachine: Searching for fa:ab:b1:c2:6f:25 in /var/db/dhcpd_leases ...
	I0914 14:36:35.127988    1522 main.go:141] libmachine: Attempt 5
	I0914 14:36:35.128006    1522 main.go:141] libmachine: Searching for fa:ab:b1:c2:6f:25 in /var/db/dhcpd_leases ...
	I0914 14:36:37.130061    1522 main.go:141] libmachine: Attempt 6
	I0914 14:36:37.130089    1522 main.go:141] libmachine: Searching for fa:ab:b1:c2:6f:25 in /var/db/dhcpd_leases ...
	I0914 14:36:37.130226    1522 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0914 14:36:37.130272    1522 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:fa:ab:b1:c2:6f:25 ID:1,fa:ab:b1:c2:6f:25 Lease:0x6504ce64}
	I0914 14:36:37.130284    1522 main.go:141] libmachine: Found match: fa:ab:b1:c2:6f:25
	I0914 14:36:37.130296    1522 main.go:141] libmachine: IP: 192.168.105.2
	I0914 14:36:37.130304    1522 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
	I0914 14:36:39.152264    1522 machine.go:88] provisioning docker machine ...
	I0914 14:36:39.152328    1522 buildroot.go:166] provisioning hostname "addons-388000"
	I0914 14:36:39.153898    1522 main.go:141] libmachine: Using SSH client type: native
	I0914 14:36:39.154765    1522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102894760] 0x102896ed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0914 14:36:39.154789    1522 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-388000 && echo "addons-388000" | sudo tee /etc/hostname
	I0914 14:36:39.254406    1522 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-388000
	
	I0914 14:36:39.254547    1522 main.go:141] libmachine: Using SSH client type: native
	I0914 14:36:39.254974    1522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102894760] 0x102896ed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0914 14:36:39.254987    1522 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-388000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-388000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-388000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 14:36:39.336783    1522 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 14:36:39.336807    1522 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17243-1006/.minikube CaCertPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17243-1006/.minikube}
	I0914 14:36:39.336834    1522 buildroot.go:174] setting up certificates
	I0914 14:36:39.336842    1522 provision.go:83] configureAuth start
	I0914 14:36:39.336850    1522 provision.go:138] copyHostCerts
	I0914 14:36:39.337062    1522 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17243-1006/.minikube/ca.pem (1082 bytes)
	I0914 14:36:39.337458    1522 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17243-1006/.minikube/cert.pem (1123 bytes)
	I0914 14:36:39.337624    1522 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17243-1006/.minikube/key.pem (1675 bytes)
	I0914 14:36:39.337823    1522 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca-key.pem org=jenkins.addons-388000 san=[192.168.105.2 192.168.105.2 localhost 127.0.0.1 minikube addons-388000]
	I0914 14:36:39.438902    1522 provision.go:172] copyRemoteCerts
	I0914 14:36:39.438967    1522 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 14:36:39.438977    1522 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/id_rsa Username:docker}
	I0914 14:36:39.475382    1522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0914 14:36:39.482935    1522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0914 14:36:39.490611    1522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0914 14:36:39.498058    1522 provision.go:86] duration metric: configureAuth took 161.21375ms
	I0914 14:36:39.498072    1522 buildroot.go:189] setting minikube options for container-runtime
	I0914 14:36:39.498194    1522 config.go:182] Loaded profile config "addons-388000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 14:36:39.498238    1522 main.go:141] libmachine: Using SSH client type: native
	I0914 14:36:39.498454    1522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102894760] 0x102896ed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0914 14:36:39.498461    1522 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0914 14:36:39.568371    1522 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0914 14:36:39.568380    1522 buildroot.go:70] root file system type: tmpfs
	I0914 14:36:39.568444    1522 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0914 14:36:39.568493    1522 main.go:141] libmachine: Using SSH client type: native
	I0914 14:36:39.568758    1522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102894760] 0x102896ed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0914 14:36:39.568795    1522 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0914 14:36:39.642658    1522 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0914 14:36:39.642714    1522 main.go:141] libmachine: Using SSH client type: native
	I0914 14:36:39.642984    1522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102894760] 0x102896ed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0914 14:36:39.642994    1522 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0914 14:36:40.018079    1522 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0914 14:36:40.018095    1522 machine.go:91] provisioned docker machine in 865.825208ms
	I0914 14:36:40.018101    1522 client.go:171] LocalClient.Create took 16.035747292s
	I0914 14:36:40.018112    1522 start.go:167] duration metric: libmachine.API.Create for "addons-388000" took 16.035815708s
	I0914 14:36:40.018117    1522 start.go:300] post-start starting for "addons-388000" (driver="qemu2")
	I0914 14:36:40.018121    1522 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 14:36:40.018186    1522 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 14:36:40.018197    1522 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/id_rsa Username:docker}
	I0914 14:36:40.056512    1522 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 14:36:40.057796    1522 info.go:137] Remote host: Buildroot 2021.02.12
	I0914 14:36:40.057807    1522 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17243-1006/.minikube/addons for local assets ...
	I0914 14:36:40.057875    1522 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17243-1006/.minikube/files for local assets ...
	I0914 14:36:40.057901    1522 start.go:303] post-start completed in 39.782666ms
	I0914 14:36:40.058218    1522 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/config.json ...
	I0914 14:36:40.058366    1522 start.go:128] duration metric: createHost completed in 16.404584042s
	I0914 14:36:40.058389    1522 main.go:141] libmachine: Using SSH client type: native
	I0914 14:36:40.058608    1522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102894760] 0x102896ed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0914 14:36:40.058612    1522 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0914 14:36:40.126242    1522 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694727400.596628044
	
	I0914 14:36:40.126252    1522 fix.go:206] guest clock: 1694727400.596628044
	I0914 14:36:40.126256    1522 fix.go:219] Guest: 2023-09-14 14:36:40.596628044 -0700 PDT Remote: 2023-09-14 14:36:40.058369 -0700 PDT m=+16.505601626 (delta=538.259044ms)
	I0914 14:36:40.126267    1522 fix.go:190] guest clock delta is within tolerance: 538.259044ms
	I0914 14:36:40.126272    1522 start.go:83] releasing machines lock for "addons-388000", held for 16.472537s
	I0914 14:36:40.126627    1522 ssh_runner.go:195] Run: cat /version.json
	I0914 14:36:40.126630    1522 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 14:36:40.126636    1522 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/id_rsa Username:docker}
	I0914 14:36:40.126680    1522 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/id_rsa Username:docker}
	I0914 14:36:40.164117    1522 ssh_runner.go:195] Run: systemctl --version
	I0914 14:36:40.279852    1522 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 14:36:40.282756    1522 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 14:36:40.282802    1522 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 14:36:40.290141    1522 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 14:36:40.290164    1522 start.go:469] detecting cgroup driver to use...
	I0914 14:36:40.290325    1522 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 14:36:40.298242    1522 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0914 14:36:40.302485    1522 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0914 14:36:40.306314    1522 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0914 14:36:40.306335    1522 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0914 14:36:40.309906    1522 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0914 14:36:40.313708    1522 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0914 14:36:40.317003    1522 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0914 14:36:40.319988    1522 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 14:36:40.323114    1522 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0914 14:36:40.326593    1522 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 14:36:40.329687    1522 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 14:36:40.332474    1522 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 14:36:40.414020    1522 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0914 14:36:40.421074    1522 start.go:469] detecting cgroup driver to use...
	I0914 14:36:40.421134    1522 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0914 14:36:40.426647    1522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 14:36:40.431508    1522 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 14:36:40.437031    1522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 14:36:40.441206    1522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0914 14:36:40.445778    1522 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0914 14:36:40.494559    1522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0914 14:36:40.500245    1522 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 14:36:40.506085    1522 ssh_runner.go:195] Run: which cri-dockerd
	I0914 14:36:40.507323    1522 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0914 14:36:40.510306    1522 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0914 14:36:40.515235    1522 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0914 14:36:40.590641    1522 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0914 14:36:40.670685    1522 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0914 14:36:40.670697    1522 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0914 14:36:40.676022    1522 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 14:36:40.753642    1522 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0914 14:36:41.915654    1522 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.162025209s)
	I0914 14:36:41.915719    1522 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0914 14:36:41.996165    1522 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0914 14:36:42.077673    1522 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0914 14:36:42.158787    1522 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 14:36:42.238393    1522 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0914 14:36:42.246223    1522 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 14:36:42.322653    1522 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0914 14:36:42.347035    1522 start.go:516] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0914 14:36:42.347147    1522 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0914 14:36:42.349276    1522 start.go:537] Will wait 60s for crictl version
	I0914 14:36:42.349310    1522 ssh_runner.go:195] Run: which crictl
	I0914 14:36:42.350645    1522 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 14:36:42.367912    1522 start.go:553] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1alpha2
	I0914 14:36:42.367994    1522 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0914 14:36:42.377957    1522 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0914 14:36:42.394599    1522 out.go:204] * Preparing Kubernetes v1.28.1 on Docker 24.0.6 ...
	I0914 14:36:42.394744    1522 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0914 14:36:42.396150    1522 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 14:36:42.399678    1522 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0914 14:36:42.399720    1522 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0914 14:36:42.404754    1522 docker.go:636] Got preloaded images: 
	I0914 14:36:42.404761    1522 docker.go:642] registry.k8s.io/kube-apiserver:v1.28.1 wasn't preloaded
	I0914 14:36:42.404801    1522 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0914 14:36:42.407644    1522 ssh_runner.go:195] Run: which lz4
	I0914 14:36:42.408926    1522 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0914 14:36:42.410207    1522 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 14:36:42.410221    1522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (356902558 bytes)
	I0914 14:36:43.758723    1522 docker.go:600] Took 1.349866 seconds to copy over tarball
	I0914 14:36:43.758788    1522 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 14:36:44.802481    1522 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.043706042s)
	I0914 14:36:44.802494    1522 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 14:36:44.818862    1522 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0914 14:36:44.822486    1522 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0914 14:36:44.827997    1522 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 14:36:44.904406    1522 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0914 14:36:47.070320    1522 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.165952375s)
	I0914 14:36:47.070426    1522 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0914 14:36:47.076673    1522 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.1
	registry.k8s.io/kube-scheduler:v1.28.1
	registry.k8s.io/kube-proxy:v1.28.1
	registry.k8s.io/kube-controller-manager:v1.28.1
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0914 14:36:47.076684    1522 cache_images.go:84] Images are preloaded, skipping loading
	I0914 14:36:47.076750    1522 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0914 14:36:47.084410    1522 cni.go:84] Creating CNI manager for ""
	I0914 14:36:47.084420    1522 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 14:36:47.084443    1522 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0914 14:36:47.084452    1522 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-388000 NodeName:addons-388000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/ku
bernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 14:36:47.084527    1522 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-388000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 14:36:47.084571    1522 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-388000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:addons-388000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0914 14:36:47.084633    1522 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0914 14:36:47.087471    1522 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 14:36:47.087501    1522 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 14:36:47.090481    1522 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0914 14:36:47.095702    1522 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 14:36:47.100584    1522 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I0914 14:36:47.105532    1522 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0914 14:36:47.106963    1522 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 14:36:47.110892    1522 certs.go:56] Setting up /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000 for IP: 192.168.105.2
	I0914 14:36:47.110903    1522 certs.go:190] acquiring lock for shared ca certs: {Name:mkd19d6e2143685b57ba1e0d43c4081bbdb26a79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 14:36:47.111053    1522 certs.go:204] generating minikubeCA CA: /Users/jenkins/minikube-integration/17243-1006/.minikube/ca.key
	I0914 14:36:47.228830    1522 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17243-1006/.minikube/ca.crt ...
	I0914 14:36:47.228840    1522 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/ca.crt: {Name:mk1c10f9290e336c983838c8c09bb8cd18a9a4c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 14:36:47.229095    1522 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17243-1006/.minikube/ca.key ...
	I0914 14:36:47.229099    1522 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/ca.key: {Name:mkbc669c78b9b93a07aa566669e7e92430fec9e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 14:36:47.229219    1522 certs.go:204] generating proxyClientCA CA: /Users/jenkins/minikube-integration/17243-1006/.minikube/proxy-client-ca.key
	I0914 14:36:47.333428    1522 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17243-1006/.minikube/proxy-client-ca.crt ...
	I0914 14:36:47.333432    1522 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/proxy-client-ca.crt: {Name:mk85d65dc023d08a0f4cb19cc395e69f12c9ed1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 14:36:47.333577    1522 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17243-1006/.minikube/proxy-client-ca.key ...
	I0914 14:36:47.333579    1522 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/proxy-client-ca.key: {Name:mk62bc08bafeee956e88b9480bac37c2df91bf30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 14:36:47.333721    1522 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/client.key
	I0914 14:36:47.333730    1522 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/client.crt with IP's: []
	I0914 14:36:47.598337    1522 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/client.crt ...
	I0914 14:36:47.598352    1522 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/client.crt: {Name:mk8ecd4e838807718c7ef97bafd599d3b7fd1a4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 14:36:47.598702    1522 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/client.key ...
	I0914 14:36:47.598710    1522 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/client.key: {Name:mk3960bc5fb536243466f07f9f23680cfa92d826 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 14:36:47.598826    1522 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/apiserver.key.96055969
	I0914 14:36:47.598838    1522 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/apiserver.crt.96055969 with IP's: [192.168.105.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0914 14:36:47.656638    1522 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/apiserver.crt.96055969 ...
	I0914 14:36:47.656642    1522 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/apiserver.crt.96055969: {Name:mk3691ba24392ca70b8d7adb6c837bd5b52dfeeb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 14:36:47.656789    1522 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/apiserver.key.96055969 ...
	I0914 14:36:47.656792    1522 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/apiserver.key.96055969: {Name:mk7619af569a08784491e3a0055c754ead430eca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 14:36:47.656913    1522 certs.go:337] copying /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/apiserver.crt.96055969 -> /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/apiserver.crt
	I0914 14:36:47.657047    1522 certs.go:341] copying /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/apiserver.key.96055969 -> /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/apiserver.key
	I0914 14:36:47.657134    1522 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/proxy-client.key
	I0914 14:36:47.657146    1522 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/proxy-client.crt with IP's: []
	I0914 14:36:47.715161    1522 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/proxy-client.crt ...
	I0914 14:36:47.715165    1522 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/proxy-client.crt: {Name:mk5c5221c842b768f8e9ba880dc08acd610bf8dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 14:36:47.715298    1522 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/proxy-client.key ...
	I0914 14:36:47.715301    1522 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/proxy-client.key: {Name:mk620ca3f197a51ffd017e6711b4bab26fb15d79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 14:36:47.715560    1522 certs.go:437] found cert: /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 14:36:47.715594    1522 certs.go:437] found cert: /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem (1082 bytes)
	I0914 14:36:47.715621    1522 certs.go:437] found cert: /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/cert.pem (1123 bytes)
	I0914 14:36:47.715645    1522 certs.go:437] found cert: /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/key.pem (1675 bytes)
	I0914 14:36:47.716027    1522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0914 14:36:47.723894    1522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 14:36:47.731037    1522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 14:36:47.738379    1522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 14:36:47.745927    1522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 14:36:47.752925    1522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 14:36:47.759542    1522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 14:36:47.766602    1522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0914 14:36:47.773763    1522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 14:36:47.780697    1522 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 14:36:47.786484    1522 ssh_runner.go:195] Run: openssl version
	I0914 14:36:47.788649    1522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 14:36:47.791615    1522 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 14:36:47.793075    1522 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 14 21:36 /usr/share/ca-certificates/minikubeCA.pem
	I0914 14:36:47.793092    1522 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 14:36:47.794978    1522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 14:36:47.798423    1522 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0914 14:36:47.799931    1522 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0914 14:36:47.799971    1522 kubeadm.go:404] StartCluster: {Name:addons-388000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.1 ClusterName:addons-388000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 14:36:47.800034    1522 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0914 14:36:47.805504    1522 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 14:36:47.808480    1522 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 14:36:47.811111    1522 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 14:36:47.814398    1522 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 14:36:47.814412    1522 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 14:36:47.835210    1522 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0914 14:36:47.835254    1522 kubeadm.go:322] [preflight] Running pre-flight checks
	I0914 14:36:47.889698    1522 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 14:36:47.889750    1522 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 14:36:47.889794    1522 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0914 14:36:47.952261    1522 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 14:36:47.962464    1522 out.go:204]   - Generating certificates and keys ...
	I0914 14:36:47.962497    1522 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0914 14:36:47.962525    1522 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0914 14:36:48.025951    1522 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0914 14:36:48.134925    1522 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0914 14:36:48.186988    1522 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0914 14:36:48.299178    1522 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0914 14:36:48.429498    1522 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0914 14:36:48.429557    1522 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-388000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0914 14:36:48.510620    1522 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0914 14:36:48.510686    1522 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-388000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0914 14:36:48.631510    1522 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0914 14:36:48.668002    1522 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0914 14:36:48.726941    1522 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0914 14:36:48.726969    1522 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 14:36:48.823035    1522 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 14:36:48.918005    1522 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 14:36:49.052610    1522 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 14:36:49.136045    1522 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 14:36:49.136292    1522 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 14:36:49.138218    1522 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 14:36:49.141449    1522 out.go:204]   - Booting up control plane ...
	I0914 14:36:49.141518    1522 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 14:36:49.141563    1522 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 14:36:49.141596    1522 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 14:36:49.146098    1522 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 14:36:49.146527    1522 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 14:36:49.146584    1522 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0914 14:36:49.235726    1522 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 14:36:53.234480    1522 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.002199 seconds
	I0914 14:36:53.234548    1522 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 14:36:53.240692    1522 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 14:36:53.748795    1522 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0914 14:36:53.748894    1522 kubeadm.go:322] [mark-control-plane] Marking the node addons-388000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0914 14:36:54.253997    1522 kubeadm.go:322] [bootstrap-token] Using token: v43sey.bixdamecwwaf1quf
	I0914 14:36:54.261418    1522 out.go:204]   - Configuring RBAC rules ...
	I0914 14:36:54.261475    1522 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 14:36:54.262616    1522 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0914 14:36:54.269041    1522 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 14:36:54.270041    1522 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0914 14:36:54.271028    1522 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 14:36:54.272209    1522 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 14:36:54.276273    1522 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0914 14:36:54.432396    1522 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0914 14:36:54.665469    1522 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0914 14:36:54.665894    1522 kubeadm.go:322] 
	I0914 14:36:54.665937    1522 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0914 14:36:54.665940    1522 kubeadm.go:322] 
	I0914 14:36:54.665992    1522 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0914 14:36:54.665996    1522 kubeadm.go:322] 
	I0914 14:36:54.666008    1522 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0914 14:36:54.666036    1522 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 14:36:54.666071    1522 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 14:36:54.666074    1522 kubeadm.go:322] 
	I0914 14:36:54.666099    1522 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0914 14:36:54.666101    1522 kubeadm.go:322] 
	I0914 14:36:54.666123    1522 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0914 14:36:54.666126    1522 kubeadm.go:322] 
	I0914 14:36:54.666148    1522 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0914 14:36:54.666182    1522 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 14:36:54.666217    1522 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 14:36:54.666220    1522 kubeadm.go:322] 
	I0914 14:36:54.666261    1522 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0914 14:36:54.666306    1522 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0914 14:36:54.666308    1522 kubeadm.go:322] 
	I0914 14:36:54.666396    1522 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token v43sey.bixdamecwwaf1quf \
	I0914 14:36:54.666457    1522 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:64a56fc8b6798e154cd69d815313de8f6cddb906dc6093ac20ba940ac7c1d871 \
	I0914 14:36:54.666472    1522 kubeadm.go:322] 	--control-plane 
	I0914 14:36:54.666475    1522 kubeadm.go:322] 
	I0914 14:36:54.666513    1522 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0914 14:36:54.666517    1522 kubeadm.go:322] 
	I0914 14:36:54.666553    1522 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token v43sey.bixdamecwwaf1quf \
	I0914 14:36:54.666621    1522 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:64a56fc8b6798e154cd69d815313de8f6cddb906dc6093ac20ba940ac7c1d871 
	I0914 14:36:54.666672    1522 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 14:36:54.666677    1522 cni.go:84] Creating CNI manager for ""
	I0914 14:36:54.666685    1522 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 14:36:54.674398    1522 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 14:36:54.677531    1522 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 14:36:54.681843    1522 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0914 14:36:54.686762    1522 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 14:36:54.686820    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:36:54.686837    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=d7492f2ae2d9b6e62b385ffcd97ebad62c645e82 minikube.k8s.io/name=addons-388000 minikube.k8s.io/updated_at=2023_09_14T14_36_54_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:36:54.745761    1522 ops.go:34] apiserver oom_adj: -16
	I0914 14:36:54.751811    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:36:54.783862    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:36:55.319135    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:36:55.819146    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:36:56.319044    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:36:56.817396    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:36:57.317676    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:36:57.819036    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:36:58.319007    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:36:58.819025    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:36:59.318963    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:36:59.819032    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:00.318959    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:00.819007    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:01.318925    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:01.819004    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:02.318900    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:02.818938    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:03.318896    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:03.818843    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:04.318914    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:04.818824    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:05.318789    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:05.818890    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:06.318784    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:06.818791    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:07.318787    1522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 14:37:07.357143    1522 kubeadm.go:1081] duration metric: took 12.670689708s to wait for elevateKubeSystemPrivileges.
	I0914 14:37:07.357158    1522 kubeadm.go:406] StartCluster complete in 19.557685291s
	I0914 14:37:07.357184    1522 settings.go:142] acquiring lock: {Name:mkcccc97e247e7e1b2e556ccc64336c05a92af53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 14:37:07.357360    1522 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17243-1006/kubeconfig
	I0914 14:37:07.357606    1522 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/kubeconfig: {Name:mkeec13fc5a79792669e9cedabfbe21efeb27d3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 14:37:07.357803    1522 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0914 14:37:07.357856    1522 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0914 14:37:07.357902    1522 addons.go:69] Setting volumesnapshots=true in profile "addons-388000"
	I0914 14:37:07.357909    1522 addons.go:231] Setting addon volumesnapshots=true in "addons-388000"
	I0914 14:37:07.357912    1522 addons.go:69] Setting ingress=true in profile "addons-388000"
	I0914 14:37:07.357919    1522 addons.go:231] Setting addon ingress=true in "addons-388000"
	I0914 14:37:07.357926    1522 host.go:66] Checking if "addons-388000" exists ...
	I0914 14:37:07.357934    1522 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-388000"
	I0914 14:37:07.357942    1522 host.go:66] Checking if "addons-388000" exists ...
	I0914 14:37:07.357951    1522 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-388000"
	I0914 14:37:07.357967    1522 host.go:66] Checking if "addons-388000" exists ...
	I0914 14:37:07.357975    1522 addons.go:69] Setting ingress-dns=true in profile "addons-388000"
	I0914 14:37:07.357985    1522 addons.go:69] Setting metrics-server=true in profile "addons-388000"
	I0914 14:37:07.358004    1522 addons.go:231] Setting addon ingress-dns=true in "addons-388000"
	I0914 14:37:07.358008    1522 addons.go:231] Setting addon metrics-server=true in "addons-388000"
	I0914 14:37:07.358046    1522 host.go:66] Checking if "addons-388000" exists ...
	I0914 14:37:07.358051    1522 host.go:66] Checking if "addons-388000" exists ...
	I0914 14:37:07.358066    1522 addons.go:69] Setting inspektor-gadget=true in profile "addons-388000"
	I0914 14:37:07.358074    1522 addons.go:231] Setting addon inspektor-gadget=true in "addons-388000"
	I0914 14:37:07.358086    1522 host.go:66] Checking if "addons-388000" exists ...
	I0914 14:37:07.358133    1522 config.go:182] Loaded profile config "addons-388000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 14:37:07.358210    1522 addons.go:69] Setting registry=true in profile "addons-388000"
	I0914 14:37:07.358222    1522 addons.go:231] Setting addon registry=true in "addons-388000"
	I0914 14:37:07.358259    1522 addons.go:69] Setting cloud-spanner=true in profile "addons-388000"
	I0914 14:37:07.358263    1522 addons.go:69] Setting default-storageclass=true in profile "addons-388000"
	I0914 14:37:07.358265    1522 addons.go:231] Setting addon cloud-spanner=true in "addons-388000"
	I0914 14:37:07.358266    1522 host.go:66] Checking if "addons-388000" exists ...
	I0914 14:37:07.358273    1522 addons.go:69] Setting storage-provisioner=true in profile "addons-388000"
	I0914 14:37:07.358276    1522 host.go:66] Checking if "addons-388000" exists ...
	I0914 14:37:07.358278    1522 addons.go:231] Setting addon storage-provisioner=true in "addons-388000"
	I0914 14:37:07.358289    1522 host.go:66] Checking if "addons-388000" exists ...
	W0914 14:37:07.358332    1522 host.go:54] host status for "addons-388000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/monitor: connect: connection refused
	W0914 14:37:07.358339    1522 addons.go:277] "addons-388000" is not running, setting inspektor-gadget=true and skipping enablement (err=<nil>)
	W0914 14:37:07.358450    1522 host.go:54] host status for "addons-388000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/monitor: connect: connection refused
	W0914 14:37:07.358453    1522 addons.go:277] "addons-388000" is not running, setting volumesnapshots=true and skipping enablement (err=<nil>)
	W0914 14:37:07.358483    1522 host.go:54] host status for "addons-388000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/monitor: connect: connection refused
	W0914 14:37:07.358489    1522 addons.go:277] "addons-388000" is not running, setting cloud-spanner=true and skipping enablement (err=<nil>)
	W0914 14:37:07.358257    1522 host.go:54] host status for "addons-388000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/monitor: connect: connection refused
	W0914 14:37:07.358494    1522 addons.go:277] "addons-388000" is not running, setting ingress=true and skipping enablement (err=<nil>)
	I0914 14:37:07.358496    1522 addons.go:467] Verifying addon ingress=true in "addons-388000"
	W0914 14:37:07.358500    1522 host.go:54] host status for "addons-388000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/monitor: connect: connection refused
	W0914 14:37:07.358504    1522 addons.go:277] "addons-388000" is not running, setting storage-provisioner=true and skipping enablement (err=<nil>)
	I0914 14:37:07.363429    1522 out.go:177] * Verifying ingress addon...
	I0914 14:37:07.358269    1522 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-388000"
	W0914 14:37:07.358528    1522 host.go:54] host status for "addons-388000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/monitor: connect: connection refused
	I0914 14:37:07.358271    1522 addons.go:69] Setting gcp-auth=true in profile "addons-388000"
	W0914 14:37:07.358722    1522 host.go:54] host status for "addons-388000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/monitor: connect: connection refused
	W0914 14:37:07.370487    1522 addons.go:277] "addons-388000" is not running, setting ingress-dns=true and skipping enablement (err=<nil>)
	I0914 14:37:07.370528    1522 mustload.go:65] Loading cluster: addons-388000
	W0914 14:37:07.370533    1522 addons.go:277] "addons-388000" is not running, setting registry=true and skipping enablement (err=<nil>)
	I0914 14:37:07.370877    1522 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0914 14:37:07.371899    1522 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-388000" context rescaled to 1 replicas
	I0914 14:37:07.372685    1522 addons.go:231] Setting addon default-storageclass=true in "addons-388000"
	I0914 14:37:07.374445    1522 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0914 14:37:07.377503    1522 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I0914 14:37:07.377530    1522 addons.go:467] Verifying addon registry=true in "addons-388000"
	I0914 14:37:07.377544    1522 host.go:66] Checking if "addons-388000" exists ...
	I0914 14:37:07.377592    1522 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 14:37:07.377611    1522 config.go:182] Loaded profile config "addons-388000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 14:37:07.379668    1522 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0914 14:37:07.387418    1522 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0914 14:37:07.384502    1522 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 14:37:07.385215    1522 host.go:66] Checking if "addons-388000" exists ...
	I0914 14:37:07.385566    1522 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 14:37:07.399476    1522 out.go:177] * Verifying Kubernetes components...
	I0914 14:37:07.399484    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 14:37:07.405519    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 14:37:07.405539    1522 out.go:177] * Verifying registry addon...
	I0914 14:37:07.409300    1522 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0914 14:37:07.413413    1522 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0914 14:37:07.409310    1522 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/id_rsa Username:docker}
	I0914 14:37:07.409318    1522 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/id_rsa Username:docker}
	I0914 14:37:07.413772    1522 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0914 14:37:07.421473    1522 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0914 14:37:07.425266    1522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 14:37:07.434436    1522 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0914 14:37:07.437375    1522 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=registry
	I0914 14:37:07.438456    1522 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0914 14:37:07.450436    1522 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0914 14:37:07.460462    1522 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0914 14:37:07.463476    1522 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0914 14:37:07.463485    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0914 14:37:07.463494    1522 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/id_rsa Username:docker}
	I0914 14:37:07.497507    1522 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 14:37:07.497516    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0914 14:37:07.503780    1522 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 14:37:07.503787    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 14:37:07.509075    1522 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 14:37:07.509081    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 14:37:07.516870    1522 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 14:37:07.522898    1522 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 14:37:07.539508    1522 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0914 14:37:07.539521    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0914 14:37:07.591865    1522 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0914 14:37:07.591879    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0914 14:37:07.635732    1522 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0914 14:37:07.635742    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0914 14:37:07.644322    1522 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0914 14:37:07.644333    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0914 14:37:07.649557    1522 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0914 14:37:07.649568    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0914 14:37:07.681313    1522 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0914 14:37:07.681325    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0914 14:37:07.685931    1522 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0914 14:37:07.685936    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0914 14:37:07.690914    1522 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0914 14:37:07.690921    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0914 14:37:07.695920    1522 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0914 14:37:07.695926    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0914 14:37:07.700851    1522 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0914 14:37:07.700856    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0914 14:37:07.705677    1522 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0914 14:37:08.213892    1522 start.go:917] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I0914 14:37:08.214323    1522 node_ready.go:35] waiting up to 6m0s for node "addons-388000" to be "Ready" ...
	I0914 14:37:08.215929    1522 node_ready.go:49] node "addons-388000" has status "Ready":"True"
	I0914 14:37:08.215948    1522 node_ready.go:38] duration metric: took 1.599458ms waiting for node "addons-388000" to be "Ready" ...
	I0914 14:37:08.215953    1522 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 14:37:08.218780    1522 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-6php8" in "kube-system" namespace to be "Ready" ...
	I0914 14:37:08.378405    1522 addons.go:467] Verifying addon metrics-server=true in "addons-388000"
	I0914 14:37:08.878056    1522 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.172383083s)
	I0914 14:37:08.878074    1522 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-388000"
	I0914 14:37:08.882346    1522 out.go:177] * Verifying csi-hostpath-driver addon...
	I0914 14:37:08.892719    1522 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0914 14:37:08.895508    1522 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0914 14:37:08.895515    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:08.901644    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:09.404389    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:09.734233    1522 pod_ready.go:97] error getting pod "coredns-5dd5756b68-6php8" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-6php8" not found
	I0914 14:37:09.734244    1522 pod_ready.go:81] duration metric: took 1.515495542s waiting for pod "coredns-5dd5756b68-6php8" in "kube-system" namespace to be "Ready" ...
	E0914 14:37:09.734250    1522 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-6php8" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-6php8" not found
	I0914 14:37:09.734253    1522 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-psn28" in "kube-system" namespace to be "Ready" ...
	I0914 14:37:09.736576    1522 pod_ready.go:92] pod "coredns-5dd5756b68-psn28" in "kube-system" namespace has status "Ready":"True"
	I0914 14:37:09.736583    1522 pod_ready.go:81] duration metric: took 2.327542ms waiting for pod "coredns-5dd5756b68-psn28" in "kube-system" namespace to be "Ready" ...
	I0914 14:37:09.736588    1522 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-388000" in "kube-system" namespace to be "Ready" ...
	I0914 14:37:09.739033    1522 pod_ready.go:92] pod "etcd-addons-388000" in "kube-system" namespace has status "Ready":"True"
	I0914 14:37:09.739038    1522 pod_ready.go:81] duration metric: took 2.447792ms waiting for pod "etcd-addons-388000" in "kube-system" namespace to be "Ready" ...
	I0914 14:37:09.739041    1522 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-388000" in "kube-system" namespace to be "Ready" ...
	I0914 14:37:09.741595    1522 pod_ready.go:92] pod "kube-apiserver-addons-388000" in "kube-system" namespace has status "Ready":"True"
	I0914 14:37:09.741601    1522 pod_ready.go:81] duration metric: took 2.556083ms waiting for pod "kube-apiserver-addons-388000" in "kube-system" namespace to be "Ready" ...
	I0914 14:37:09.741605    1522 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-388000" in "kube-system" namespace to be "Ready" ...
	I0914 14:37:09.904411    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:10.016583    1522 pod_ready.go:92] pod "kube-controller-manager-addons-388000" in "kube-system" namespace has status "Ready":"True"
	I0914 14:37:10.016591    1522 pod_ready.go:81] duration metric: took 274.98975ms waiting for pod "kube-controller-manager-addons-388000" in "kube-system" namespace to be "Ready" ...
	I0914 14:37:10.016595    1522 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8pbsf" in "kube-system" namespace to be "Ready" ...
	I0914 14:37:10.404994    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:10.417030    1522 pod_ready.go:92] pod "kube-proxy-8pbsf" in "kube-system" namespace has status "Ready":"True"
	I0914 14:37:10.417036    1522 pod_ready.go:81] duration metric: took 400.447833ms waiting for pod "kube-proxy-8pbsf" in "kube-system" namespace to be "Ready" ...
	I0914 14:37:10.417041    1522 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-388000" in "kube-system" namespace to be "Ready" ...
	I0914 14:37:10.816814    1522 pod_ready.go:92] pod "kube-scheduler-addons-388000" in "kube-system" namespace has status "Ready":"True"
	I0914 14:37:10.816823    1522 pod_ready.go:81] duration metric: took 399.789417ms waiting for pod "kube-scheduler-addons-388000" in "kube-system" namespace to be "Ready" ...
	I0914 14:37:10.816827    1522 pod_ready.go:38] duration metric: took 2.600935083s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 14:37:10.816835    1522 api_server.go:52] waiting for apiserver process to appear ...
	I0914 14:37:10.816886    1522 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 14:37:10.821727    1522 api_server.go:72] duration metric: took 3.437324417s to wait for apiserver process to appear ...
	I0914 14:37:10.821733    1522 api_server.go:88] waiting for apiserver healthz status ...
	I0914 14:37:10.821738    1522 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I0914 14:37:10.825342    1522 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
	I0914 14:37:10.826107    1522 api_server.go:141] control plane version: v1.28.1
	I0914 14:37:10.826114    1522 api_server.go:131] duration metric: took 4.378333ms to wait for apiserver health ...
	I0914 14:37:10.826117    1522 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 14:37:10.904363    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:11.018876    1522 system_pods.go:59] 10 kube-system pods found
	I0914 14:37:11.018886    1522 system_pods.go:61] "coredns-5dd5756b68-psn28" [50c0e128-9a93-456c-83af-dfbcda64eaa4] Running
	I0914 14:37:11.018891    1522 system_pods.go:61] "csi-hostpath-attacher-0" [29be2dba-12b9-4442-8c83-8d24fd054a90] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0914 14:37:11.018894    1522 system_pods.go:61] "csi-hostpath-resizer-0" [11fcc7a2-d176-442f-9cd6-04668da8d423] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0914 14:37:11.018898    1522 system_pods.go:61] "csi-hostpathplugin-b5k2m" [aa03259b-6f1a-4537-95f6-47e8cf8fcc39] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0914 14:37:11.018909    1522 system_pods.go:61] "etcd-addons-388000" [b91a9e99-fb10-4340-977e-536225df8415] Running
	I0914 14:37:11.018914    1522 system_pods.go:61] "kube-apiserver-addons-388000" [43fed39d-32f3-4b45-b43c-d9918758a66c] Running
	I0914 14:37:11.018917    1522 system_pods.go:61] "kube-controller-manager-addons-388000" [31eb0c68-03ca-4907-921b-14ccef970edf] Running
	I0914 14:37:11.018920    1522 system_pods.go:61] "kube-proxy-8pbsf" [e9d3ab50-7594-4360-8226-d37e954aca6e] Running
	I0914 14:37:11.018923    1522 system_pods.go:61] "kube-scheduler-addons-388000" [d931a34d-1c14-4544-80cd-ce847a1f1af8] Running
	I0914 14:37:11.018927    1522 system_pods.go:61] "metrics-server-7c66d45ddc-dtj78" [7b539063-f45b-4a15-97e7-6713ea57e519] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 14:37:11.018932    1522 system_pods.go:74] duration metric: took 192.817125ms to wait for pod list to return data ...
	I0914 14:37:11.018935    1522 default_sa.go:34] waiting for default service account to be created ...
	I0914 14:37:11.216117    1522 default_sa.go:45] found service account: "default"
	I0914 14:37:11.216127    1522 default_sa.go:55] duration metric: took 197.1925ms for default service account to be created ...
	I0914 14:37:11.216130    1522 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 14:37:11.404125    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:11.419144    1522 system_pods.go:86] 10 kube-system pods found
	I0914 14:37:11.419151    1522 system_pods.go:89] "coredns-5dd5756b68-psn28" [50c0e128-9a93-456c-83af-dfbcda64eaa4] Running
	I0914 14:37:11.419155    1522 system_pods.go:89] "csi-hostpath-attacher-0" [29be2dba-12b9-4442-8c83-8d24fd054a90] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0914 14:37:11.419158    1522 system_pods.go:89] "csi-hostpath-resizer-0" [11fcc7a2-d176-442f-9cd6-04668da8d423] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0914 14:37:11.419163    1522 system_pods.go:89] "csi-hostpathplugin-b5k2m" [aa03259b-6f1a-4537-95f6-47e8cf8fcc39] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0914 14:37:11.419167    1522 system_pods.go:89] "etcd-addons-388000" [b91a9e99-fb10-4340-977e-536225df8415] Running
	I0914 14:37:11.419169    1522 system_pods.go:89] "kube-apiserver-addons-388000" [43fed39d-32f3-4b45-b43c-d9918758a66c] Running
	I0914 14:37:11.419176    1522 system_pods.go:89] "kube-controller-manager-addons-388000" [31eb0c68-03ca-4907-921b-14ccef970edf] Running
	I0914 14:37:11.419178    1522 system_pods.go:89] "kube-proxy-8pbsf" [e9d3ab50-7594-4360-8226-d37e954aca6e] Running
	I0914 14:37:11.419180    1522 system_pods.go:89] "kube-scheduler-addons-388000" [d931a34d-1c14-4544-80cd-ce847a1f1af8] Running
	I0914 14:37:11.419183    1522 system_pods.go:89] "metrics-server-7c66d45ddc-dtj78" [7b539063-f45b-4a15-97e7-6713ea57e519] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 14:37:11.419189    1522 system_pods.go:126] duration metric: took 203.059ms to wait for k8s-apps to be running ...
	I0914 14:37:11.419193    1522 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 14:37:11.419242    1522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 14:37:11.424702    1522 system_svc.go:56] duration metric: took 5.506625ms WaitForService to wait for kubelet.
	I0914 14:37:11.424708    1522 kubeadm.go:581] duration metric: took 4.040322208s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0914 14:37:11.424718    1522 node_conditions.go:102] verifying NodePressure condition ...
	I0914 14:37:11.616510    1522 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0914 14:37:11.616524    1522 node_conditions.go:123] node cpu capacity is 2
	I0914 14:37:11.616531    1522 node_conditions.go:105] duration metric: took 191.81375ms to run NodePressure ...
	I0914 14:37:11.616536    1522 start.go:228] waiting for startup goroutines ...
	I0914 14:37:11.904062    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:12.404356    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:12.904283    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:13.404719    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:13.905195    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:14.010940    1522 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0914 14:37:14.010958    1522 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/id_rsa Username:docker}
	I0914 14:37:14.050416    1522 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0914 14:37:14.056158    1522 addons.go:231] Setting addon gcp-auth=true in "addons-388000"
	I0914 14:37:14.056180    1522 host.go:66] Checking if "addons-388000" exists ...
	I0914 14:37:14.056914    1522 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0914 14:37:14.056921    1522 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/addons-388000/id_rsa Username:docker}
	I0914 14:37:14.098984    1522 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0914 14:37:14.102963    1522 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0914 14:37:14.106843    1522 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0914 14:37:14.106851    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0914 14:37:14.112250    1522 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0914 14:37:14.112259    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0914 14:37:14.117057    1522 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0914 14:37:14.117063    1522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0914 14:37:14.122524    1522 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0914 14:37:14.407542    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:14.453711    1522 addons.go:467] Verifying addon gcp-auth=true in "addons-388000"
	I0914 14:37:14.458827    1522 out.go:177] * Verifying gcp-auth addon...
	I0914 14:37:14.469206    1522 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0914 14:37:14.473873    1522 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0914 14:37:14.473883    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:14.477552    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:14.905449    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:14.981028    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:15.404241    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:15.481017    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:15.904406    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:15.981050    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:16.404161    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:16.481356    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:16.904348    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:16.980852    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:17.404432    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:17.480937    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:17.904061    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:17.980969    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:18.404491    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:18.481031    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:18.904020    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:18.981054    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:19.405323    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:19.480019    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:19.904276    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:19.980839    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:20.404204    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:20.481250    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:20.904037    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:20.981407    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:21.404239    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:21.481248    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:21.904261    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:21.981109    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:22.405094    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:22.481049    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:22.904407    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:22.981227    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:23.404066    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:23.480779    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:23.904000    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:23.980955    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:24.404182    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:24.480903    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:24.904034    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:24.980896    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:25.403993    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:25.480949    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 14:37:25.903717    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:25.981591    1522 kapi.go:107] duration metric: took 11.512675166s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0914 14:37:25.985811    1522 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-388000 cluster.
	I0914 14:37:25.990747    1522 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0914 14:37:25.993661    1522 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0914 14:37:26.404089    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:26.904132    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:27.405664    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:27.903941    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:28.403884    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:28.903901    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:29.404487    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:29.903852    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:30.404685    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:30.903890    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:31.403753    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:31.903926    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:32.404318    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:32.903835    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:33.403834    1522 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 14:37:33.903687    1522 kapi.go:107] duration metric: took 25.011601375s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0914 14:43:07.370409    1522 kapi.go:107] duration metric: took 6m0.008648916s to wait for app.kubernetes.io/name=ingress-nginx ...
	W0914 14:43:07.370479    1522 out.go:239] ! Enabling 'ingress' returned an error: running callbacks: [waiting for app.kubernetes.io/name=ingress-nginx pods: context deadline exceeded]
	I0914 14:43:07.418192    1522 kapi.go:107] duration metric: took 6m0.013534334s to wait for kubernetes.io/minikube-addons=registry ...
	W0914 14:43:07.418227    1522 out.go:239] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
	I0914 14:43:07.425587    1522 out.go:177] * Enabled addons: inspektor-gadget, volumesnapshots, cloud-spanner, storage-provisioner, ingress-dns, default-storageclass, metrics-server, gcp-auth, csi-hostpath-driver
	I0914 14:43:07.433636    1522 addons.go:502] enable addons completed in 6m0.084906709s: enabled=[inspektor-gadget volumesnapshots cloud-spanner storage-provisioner ingress-dns default-storageclass metrics-server gcp-auth csi-hostpath-driver]
	I0914 14:43:07.433650    1522 start.go:233] waiting for cluster config update ...
	I0914 14:43:07.433664    1522 start.go:242] writing updated cluster config ...
	I0914 14:43:07.433996    1522 ssh_runner.go:195] Run: rm -f paused
	I0914 14:43:07.464084    1522 start.go:600] kubectl: 1.27.2, cluster: 1.28.1 (minor skew: 1)
	I0914 14:43:07.467672    1522 out.go:177] * Done! kubectl is now configured to use "addons-388000" cluster and "default" namespace by default
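
	The six-minute "context deadline exceeded" failures recorded above for app.kubernetes.io/name=ingress-nginx and kubernetes.io/minikube-addons=registry come out of the same label-selector polling loop that completed for gcp-auth (11.5s) and csi-hostpath-driver (25.0s). Below is a minimal sketch of that deadline-bounded wait pattern, assuming only that kubectl is on PATH; it is an illustration, not minikube's actual kapi.go implementation.

	// Minimal sketch, not minikube's real code: poll pods matching a label
	// selector until all are Running (a simplified stand-in for the full
	// Ready check) or the context deadline expires.
	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func waitForLabeledPods(ctx context.Context, ns, selector string) error {
		ticker := time.NewTicker(500 * time.Millisecond)
		defer ticker.Stop()
		for {
			out, err := exec.CommandContext(ctx, "kubectl", "get", "pods",
				"-n", ns, "-l", selector, "-o",
				`jsonpath={range .items[*]}{.status.phase}{"\n"}{end}`).Output()
			if err == nil {
				phases := strings.Fields(string(out))
				allRunning := len(phases) > 0
				for _, p := range phases {
					if p != "Running" {
						allRunning = false
					}
				}
				if allRunning {
					return nil
				}
			}
			select {
			case <-ctx.Done():
				// This is the failure mode reported above for the ingress and registry addons.
				return fmt.Errorf("waiting for %q pods: %w", selector, ctx.Err())
			case <-ticker.C:
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		if err := waitForLabeledPods(ctx, "kube-system",
			"kubernetes.io/minikube-addons=csi-hostpath-driver"); err != nil {
			fmt.Println("!", err)
		}
	}

	With the 6m0s timeout used above, ctx.Err() is what ultimately surfaces as the "running callbacks" warnings for the ingress and registry addons.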
	
	* 
	* ==> Docker <==
	* -- Journal begins at Thu 2023-09-14 21:36:36 UTC, ends at Thu 2023-09-14 21:56:45 UTC. --
	Sep 14 21:37:28 addons-388000 cri-dockerd[1056]: time="2023-09-14T21:37:28Z" level=info msg="Stop pulling image registry.k8s.io/sig-storage/livenessprobe:v2.8.0@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0: Status: Downloaded newer image for registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0"
	Sep 14 21:37:28 addons-388000 dockerd[1162]: time="2023-09-14T21:37:28.601133366Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 14 21:37:28 addons-388000 dockerd[1162]: time="2023-09-14T21:37:28.601186991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 21:37:28 addons-388000 dockerd[1162]: time="2023-09-14T21:37:28.601201491Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 14 21:37:28 addons-388000 dockerd[1162]: time="2023-09-14T21:37:28.601212200Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 21:37:28 addons-388000 dockerd[1156]: time="2023-09-14T21:37:28.692071408Z" level=warning msg="reference for unknown type: " digest="sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8" remote="registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8"
	Sep 14 21:37:31 addons-388000 cri-dockerd[1056]: time="2023-09-14T21:37:31Z" level=info msg="Stop pulling image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8: Status: Downloaded newer image for registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8"
	Sep 14 21:37:31 addons-388000 dockerd[1162]: time="2023-09-14T21:37:31.232372201Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 14 21:37:31 addons-388000 dockerd[1162]: time="2023-09-14T21:37:31.232402326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 21:37:31 addons-388000 dockerd[1162]: time="2023-09-14T21:37:31.232412909Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 14 21:37:31 addons-388000 dockerd[1162]: time="2023-09-14T21:37:31.232417493Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 21:37:31 addons-388000 dockerd[1156]: time="2023-09-14T21:37:31.325578326Z" level=warning msg="reference for unknown type: " digest="sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f" remote="registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f"
	Sep 14 21:37:33 addons-388000 cri-dockerd[1056]: time="2023-09-14T21:37:33Z" level=info msg="Stop pulling image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f: Status: Downloaded newer image for registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f"
	Sep 14 21:37:33 addons-388000 dockerd[1162]: time="2023-09-14T21:37:33.503964160Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 14 21:37:33 addons-388000 dockerd[1162]: time="2023-09-14T21:37:33.503991702Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 21:37:33 addons-388000 dockerd[1162]: time="2023-09-14T21:37:33.504000744Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 14 21:37:33 addons-388000 dockerd[1162]: time="2023-09-14T21:37:33.504006994Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 21:55:14 addons-388000 dockerd[1162]: time="2023-09-14T21:55:14.601399406Z" level=info msg="shim disconnected" id=e99b5961a5b90d9d962226a3f942a728e507531402beb7414a01fa9d79554d64 namespace=moby
	Sep 14 21:55:14 addons-388000 dockerd[1162]: time="2023-09-14T21:55:14.601432322Z" level=warning msg="cleaning up after shim disconnected" id=e99b5961a5b90d9d962226a3f942a728e507531402beb7414a01fa9d79554d64 namespace=moby
	Sep 14 21:55:14 addons-388000 dockerd[1162]: time="2023-09-14T21:55:14.601436822Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 14 21:55:14 addons-388000 dockerd[1156]: time="2023-09-14T21:55:14.601734604Z" level=info msg="ignoring event" container=e99b5961a5b90d9d962226a3f942a728e507531402beb7414a01fa9d79554d64 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 14 21:55:14 addons-388000 dockerd[1156]: time="2023-09-14T21:55:14.667931603Z" level=info msg="ignoring event" container=40cdfd9e591d6d222ceba6780cb17bfc909ac2ecbb55e2501c67b4a00e7499a9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 14 21:55:14 addons-388000 dockerd[1162]: time="2023-09-14T21:55:14.668071849Z" level=info msg="shim disconnected" id=40cdfd9e591d6d222ceba6780cb17bfc909ac2ecbb55e2501c67b4a00e7499a9 namespace=moby
	Sep 14 21:55:14 addons-388000 dockerd[1162]: time="2023-09-14T21:55:14.668117056Z" level=warning msg="cleaning up after shim disconnected" id=40cdfd9e591d6d222ceba6780cb17bfc909ac2ecbb55e2501c67b4a00e7499a9 namespace=moby
	Sep 14 21:55:14 addons-388000 dockerd[1162]: time="2023-09-14T21:55:14.668121222Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID
	c6e7158ec87e6       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          19 minutes ago      Running             csi-snapshotter                          0                   23a9864c5e7a2
	8fbd96f503108       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          19 minutes ago      Running             csi-provisioner                          0                   23a9864c5e7a2
	5a28f3666ec4d       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            19 minutes ago      Running             liveness-probe                           0                   23a9864c5e7a2
	4a515f3dbd90e       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           19 minutes ago      Running             hostpath                                 0                   23a9864c5e7a2
	726bdbe627b06       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf                                 19 minutes ago      Running             gcp-auth                                 0                   039c490b8ce95
	c5e816aa3fb60       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                19 minutes ago      Running             node-driver-registrar                    0                   23a9864c5e7a2
	0574ef72c784a       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              19 minutes ago      Running             csi-resizer                              0                   928188ebbbe5c
	0af4f9c858980       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   19 minutes ago      Running             csi-external-health-monitor-controller   0                   23a9864c5e7a2
	9a3fe3bf72dd7       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             19 minutes ago      Running             csi-attacher                             0                   aec96cfd028be
	1f519e69776da       97e04611ad434                                                                                                                                19 minutes ago      Running             coredns                                  0                   6b82b02e01da4
	c36ca5fc76214       812f5241df7fd                                                                                                                                19 minutes ago      Running             kube-proxy                               0                   24118a5be8efa
	af45960dc2d7c       b4a5a57e99492                                                                                                                                19 minutes ago      Running             kube-scheduler                           0                   6dde63050aa99
	39f78945ed576       b29fb62480892                                                                                                                                19 minutes ago      Running             kube-apiserver                           0                   a02ab403a50ec
	f2717f532e595       8b6e1980b7584                                                                                                                                19 minutes ago      Running             kube-controller-manager                  0                   834af4f99b3bc
	5a63d0e8296f4       9cdd6470f48c8                                                                                                                                19 minutes ago      Running             etcd                                     0                   b2289ff5c077b
	
	* 
	* ==> coredns [1f519e69776d] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               addons-388000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-388000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7492f2ae2d9b6e62b385ffcd97ebad62c645e82
	                    minikube.k8s.io/name=addons-388000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_14T14_36_54_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-388000
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-388000"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Sep 2023 21:36:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-388000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Sep 2023 21:56:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Sep 2023 21:53:15 +0000   Thu, 14 Sep 2023 21:36:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Sep 2023 21:53:15 +0000   Thu, 14 Sep 2023 21:36:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Sep 2023 21:53:15 +0000   Thu, 14 Sep 2023 21:36:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Sep 2023 21:53:15 +0000   Thu, 14 Sep 2023 21:36:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-388000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 ca8cf67d46214b1fbc59c14cf3d2d66f
	  System UUID:                ca8cf67d46214b1fbc59c14cf3d2d66f
	  Boot ID:                    386c1075-3226-461a-ab43-e16ad465a6c4
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  gcp-auth                    gcp-auth-d4c87556c-pjjjl                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-5dd5756b68-psn28                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     19m
	  kube-system                 csi-hostpath-attacher-0                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 csi-hostpath-resizer-0                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 csi-hostpathplugin-b5k2m                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 etcd-addons-388000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         19m
	  kube-system                 kube-apiserver-addons-388000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-addons-388000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-8pbsf                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-addons-388000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 19m   kube-proxy       
	  Normal  Starting                 19m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  19m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  19m   kubelet          Node addons-388000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m   kubelet          Node addons-388000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m   kubelet          Node addons-388000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                19m   kubelet          Node addons-388000 status is now: NodeReady
	  Normal  RegisteredNode           19m   node-controller  Node addons-388000 event: Registered Node addons-388000 in Controller
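
	For reference, the percentages in the Allocated resources table above are computed against the node's Allocatable figures listed earlier (cpu 2, memory 3905012Ki):

	  750m CPU requests / 2000m allocatable           = 0.375  -> reported as 37%
	  170Mi (174080Ki) memory requests / 3905012Ki    ≈ 0.045  -> reported as 4%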
	
	* 
	* ==> dmesg <==
	* [  +0.000000] KASLR disabled due to lack of seed
	[  +0.645091] EINJ: EINJ table not found.
	[  +0.506039] systemd-fstab-generator[117]: Ignoring "noauto" for root device
	[  +0.043466] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000824] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +4.211816] systemd-fstab-generator[481]: Ignoring "noauto" for root device
	[  +0.087452] systemd-fstab-generator[492]: Ignoring "noauto" for root device
	[  +0.529791] systemd-fstab-generator[754]: Ignoring "noauto" for root device
	[  +0.178247] systemd-fstab-generator[791]: Ignoring "noauto" for root device
	[  +0.078699] systemd-fstab-generator[802]: Ignoring "noauto" for root device
	[  +0.082696] systemd-fstab-generator[815]: Ignoring "noauto" for root device
	[  +1.243164] systemd-fstab-generator[973]: Ignoring "noauto" for root device
	[  +0.079535] systemd-fstab-generator[984]: Ignoring "noauto" for root device
	[  +0.081510] systemd-fstab-generator[995]: Ignoring "noauto" for root device
	[  +0.082103] systemd-fstab-generator[1006]: Ignoring "noauto" for root device
	[  +0.084560] systemd-fstab-generator[1049]: Ignoring "noauto" for root device
	[  +2.579762] systemd-fstab-generator[1149]: Ignoring "noauto" for root device
	[  +2.146558] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.177617] systemd-fstab-generator[1466]: Ignoring "noauto" for root device
	[  +5.135787] systemd-fstab-generator[2333]: Ignoring "noauto" for root device
	[Sep14 21:37] kauditd_printk_skb: 41 callbacks suppressed
	[  +1.224924] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[  +5.069347] kauditd_printk_skb: 47 callbacks suppressed
	[  +8.062309] kauditd_printk_skb: 6 callbacks suppressed
	[  +6.104989] kauditd_printk_skb: 5 callbacks suppressed
	
	* 
	* ==> etcd [5a63d0e8296f] <==
	* {"level":"info","ts":"2023-09-14T21:36:50.716133Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-14T21:36:51.510944Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 is starting a new election at term 1"}
	{"level":"info","ts":"2023-09-14T21:36:51.511016Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-09-14T21:36:51.51104Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 received MsgPreVoteResp from c46d288d2fcb0590 at term 1"}
	{"level":"info","ts":"2023-09-14T21:36:51.511075Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became candidate at term 2"}
	{"level":"info","ts":"2023-09-14T21:36:51.511094Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 received MsgVoteResp from c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2023-09-14T21:36:51.511114Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became leader at term 2"}
	{"level":"info","ts":"2023-09-14T21:36:51.511137Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c46d288d2fcb0590 elected leader c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2023-09-14T21:36:51.511827Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"c46d288d2fcb0590","local-member-attributes":"{Name:addons-388000 ClientURLs:[https://192.168.105.2:2379]}","request-path":"/0/members/c46d288d2fcb0590/attributes","cluster-id":"6e03e7863b4f9c54","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-14T21:36:51.511953Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-14T21:36:51.512015Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-14T21:36:51.512221Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-14T21:36:51.512251Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-14T21:36:51.512273Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-14T21:36:51.512322Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-14T21:36:51.512345Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-14T21:36:51.512368Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-14T21:36:51.512743Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.2:2379"}
	{"level":"info","ts":"2023-09-14T21:36:51.513582Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-14T21:46:51.09248Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":801}
	{"level":"info","ts":"2023-09-14T21:46:51.094243Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":801,"took":"1.342212ms","hash":1083412012}
	{"level":"info","ts":"2023-09-14T21:46:51.094259Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1083412012,"revision":801,"compact-revision":-1}
	{"level":"info","ts":"2023-09-14T21:51:51.097381Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":951}
	{"level":"info","ts":"2023-09-14T21:51:51.097919Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":951,"took":"269.454µs","hash":1387439011}
	{"level":"info","ts":"2023-09-14T21:51:51.097932Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1387439011,"revision":951,"compact-revision":801}
	
	* 
	* ==> gcp-auth [726bdbe627b0] <==
	* 2023/09/14 21:37:25 GCP Auth Webhook started!
	
	* 
	* ==> kernel <==
	*  21:56:45 up 20 min,  0 users,  load average: 0.11, 0.17, 0.17
	Linux addons-388000 5.10.57 #1 SMP PREEMPT Wed Sep 13 19:05:16 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [39f78945ed57] <==
	* I0914 21:44:51.695063       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0914 21:45:51.695412       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0914 21:46:51.695845       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0914 21:46:51.765612       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0914 21:47:51.695437       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0914 21:48:51.695685       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0914 21:49:51.695503       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0914 21:50:51.694805       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0914 21:51:51.695866       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0914 21:51:51.770993       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0914 21:52:51.695289       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0914 21:53:00.844205       1 watcher.go:245] watch chan error: etcdserver: mvcc: required revision has been compacted
	I0914 21:53:51.695037       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0914 21:54:51.695833       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0914 21:55:13.478412       1 controller.go:159] removing "v1beta1.metrics.k8s.io" from AggregationController failed with: resource not found
	E0914 21:55:19.745518       1 handler_proxy.go:137] error resolving kube-system/metrics-server: service "metrics-server" not found
	W0914 21:55:19.745550       1 handler_proxy.go:93] no RequestInfo found in the context
	E0914 21:55:19.745571       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0914 21:55:19.745579       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0914 21:56:19.746684       1 handler_proxy.go:137] error resolving kube-system/metrics-server: service "metrics-server" not found
	W0914 21:56:19.746702       1 handler_proxy.go:93] no RequestInfo found in the context
	E0914 21:56:19.746726       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0914 21:56:19.746731       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
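
	The repeated 503s above are the aggregation layer still routing v1beta1.metrics.k8s.io to the kube-system/metrics-server Service after that Service was removed around 21:55 (compare the controller-manager and Docker sections). As a purely hypothetical client-side check, assuming kubectl access to this cluster, the APIService's Available condition could be read as sketched below; this is an illustration, not something the test harness runs.

	// Illustrative only: read the Available condition of the aggregated
	// APIService that the apiserver keeps failing to proxy to.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("kubectl", "get", "apiservice", "v1beta1.metrics.k8s.io",
			"-o", `jsonpath={.status.conditions[?(@.type=="Available")].status}: {.status.conditions[?(@.type=="Available")].message}`).CombinedOutput()
		if err != nil {
			fmt.Println("kubectl failed:", err)
		}
		// Once metrics-server is gone, this would be expected to print
		// "False: ..." with a service-unavailable style message.
		fmt.Println(string(out))
	}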
	
	* 
	* ==> kube-controller-manager [f2717f532e59] <==
	* I0914 21:37:18.703329       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-7c66d45ddc" duration="5.130125ms"
	I0914 21:37:18.703353       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-7c66d45ddc" duration="13.667µs"
	I0914 21:37:21.717272       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0914 21:37:21.725039       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0914 21:37:22.734779       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0914 21:37:22.813793       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0914 21:37:23.747243       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0914 21:37:23.753327       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0914 21:37:23.816708       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0914 21:37:23.819180       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0914 21:37:23.821789       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0914 21:37:23.822117       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-create" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0914 21:37:23.822779       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0914 21:37:24.756088       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0914 21:37:24.759099       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0914 21:37:24.761716       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0914 21:37:24.762180       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0914 21:37:24.762196       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0914 21:37:25.770929       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-d4c87556c" duration="2.310416ms"
	I0914 21:37:25.771746       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-d4c87556c" duration="11.791µs"
	I0914 21:37:53.005858       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0914 21:37:53.014644       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0914 21:37:54.003849       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0914 21:37:54.024141       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0914 21:55:13.485728       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-7c66d45ddc" duration="2.166µs"
	
	* 
	* ==> kube-proxy [c36ca5fc7621] <==
	* I0914 21:37:08.522854       1 server_others.go:69] "Using iptables proxy"
	I0914 21:37:08.529066       1 node.go:141] Successfully retrieved node IP: 192.168.105.2
	I0914 21:37:08.587870       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0914 21:37:08.587883       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0914 21:37:08.588459       1 server_others.go:152] "Using iptables Proxier"
	I0914 21:37:08.588486       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0914 21:37:08.588572       1 server.go:846] "Version info" version="v1.28.1"
	I0914 21:37:08.588578       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 21:37:08.589296       1 config.go:188] "Starting service config controller"
	I0914 21:37:08.589305       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0914 21:37:08.589315       1 config.go:97] "Starting endpoint slice config controller"
	I0914 21:37:08.589317       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0914 21:37:08.589522       1 config.go:315] "Starting node config controller"
	I0914 21:37:08.589524       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0914 21:37:08.690794       1 shared_informer.go:318] Caches are synced for node config
	I0914 21:37:08.690821       1 shared_informer.go:318] Caches are synced for service config
	I0914 21:37:08.690838       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [af45960dc2d7] <==
	* E0914 21:36:52.199210       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0914 21:36:52.199206       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0914 21:36:52.199236       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0914 21:36:52.199258       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0914 21:36:52.199265       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0914 21:36:52.199278       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0914 21:36:52.199281       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0914 21:36:52.199189       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0914 21:36:52.199288       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0914 21:36:52.199260       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0914 21:36:52.199247       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0914 21:36:52.199323       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0914 21:36:53.095318       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0914 21:36:53.095337       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0914 21:36:53.142146       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0914 21:36:53.142164       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0914 21:36:53.158912       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0914 21:36:53.159021       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0914 21:36:53.162940       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0914 21:36:53.163031       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0914 21:36:53.206403       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0914 21:36:53.206481       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0914 21:36:53.209535       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0914 21:36:53.209549       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0914 21:36:53.797539       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-09-14 21:36:36 UTC, ends at Thu 2023-09-14 21:56:45 UTC. --
	Sep 14 21:52:54 addons-388000 kubelet[2339]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 21:52:54 addons-388000 kubelet[2339]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 14 21:53:54 addons-388000 kubelet[2339]: E0914 21:53:54.524934    2339 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 14 21:53:54 addons-388000 kubelet[2339]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 14 21:53:54 addons-388000 kubelet[2339]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 21:53:54 addons-388000 kubelet[2339]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 14 21:54:54 addons-388000 kubelet[2339]: E0914 21:54:54.528203    2339 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 14 21:54:54 addons-388000 kubelet[2339]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 14 21:54:54 addons-388000 kubelet[2339]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 21:54:54 addons-388000 kubelet[2339]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 14 21:55:14 addons-388000 kubelet[2339]: I0914 21:55:14.853529    2339 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrp9b\" (UniqueName: \"kubernetes.io/projected/7b539063-f45b-4a15-97e7-6713ea57e519-kube-api-access-nrp9b\") pod \"7b539063-f45b-4a15-97e7-6713ea57e519\" (UID: \"7b539063-f45b-4a15-97e7-6713ea57e519\") "
	Sep 14 21:55:14 addons-388000 kubelet[2339]: I0914 21:55:14.853570    2339 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/7b539063-f45b-4a15-97e7-6713ea57e519-tmp-dir\") pod \"7b539063-f45b-4a15-97e7-6713ea57e519\" (UID: \"7b539063-f45b-4a15-97e7-6713ea57e519\") "
	Sep 14 21:55:14 addons-388000 kubelet[2339]: I0914 21:55:14.853720    2339 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b539063-f45b-4a15-97e7-6713ea57e519-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "7b539063-f45b-4a15-97e7-6713ea57e519" (UID: "7b539063-f45b-4a15-97e7-6713ea57e519"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Sep 14 21:55:14 addons-388000 kubelet[2339]: I0914 21:55:14.856486    2339 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b539063-f45b-4a15-97e7-6713ea57e519-kube-api-access-nrp9b" (OuterVolumeSpecName: "kube-api-access-nrp9b") pod "7b539063-f45b-4a15-97e7-6713ea57e519" (UID: "7b539063-f45b-4a15-97e7-6713ea57e519"). InnerVolumeSpecName "kube-api-access-nrp9b". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 14 21:55:14 addons-388000 kubelet[2339]: I0914 21:55:14.955474    2339 reconciler_common.go:300] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/7b539063-f45b-4a15-97e7-6713ea57e519-tmp-dir\") on node \"addons-388000\" DevicePath \"\""
	Sep 14 21:55:14 addons-388000 kubelet[2339]: I0914 21:55:14.955491    2339 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-nrp9b\" (UniqueName: \"kubernetes.io/projected/7b539063-f45b-4a15-97e7-6713ea57e519-kube-api-access-nrp9b\") on node \"addons-388000\" DevicePath \"\""
	Sep 14 21:55:15 addons-388000 kubelet[2339]: I0914 21:55:15.230909    2339 scope.go:117] "RemoveContainer" containerID="e99b5961a5b90d9d962226a3f942a728e507531402beb7414a01fa9d79554d64"
	Sep 14 21:55:15 addons-388000 kubelet[2339]: I0914 21:55:15.241455    2339 scope.go:117] "RemoveContainer" containerID="e99b5961a5b90d9d962226a3f942a728e507531402beb7414a01fa9d79554d64"
	Sep 14 21:55:15 addons-388000 kubelet[2339]: E0914 21:55:15.242184    2339 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: e99b5961a5b90d9d962226a3f942a728e507531402beb7414a01fa9d79554d64" containerID="e99b5961a5b90d9d962226a3f942a728e507531402beb7414a01fa9d79554d64"
	Sep 14 21:55:15 addons-388000 kubelet[2339]: I0914 21:55:15.242221    2339 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"e99b5961a5b90d9d962226a3f942a728e507531402beb7414a01fa9d79554d64"} err="failed to get container status \"e99b5961a5b90d9d962226a3f942a728e507531402beb7414a01fa9d79554d64\": rpc error: code = Unknown desc = Error response from daemon: No such container: e99b5961a5b90d9d962226a3f942a728e507531402beb7414a01fa9d79554d64"
	Sep 14 21:55:16 addons-388000 kubelet[2339]: I0914 21:55:16.518655    2339 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="7b539063-f45b-4a15-97e7-6713ea57e519" path="/var/lib/kubelet/pods/7b539063-f45b-4a15-97e7-6713ea57e519/volumes"
	Sep 14 21:55:54 addons-388000 kubelet[2339]: E0914 21:55:54.525045    2339 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 14 21:55:54 addons-388000 kubelet[2339]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 14 21:55:54 addons-388000 kubelet[2339]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 21:55:54 addons-388000 kubelet[2339]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-388000 -n addons-388000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-388000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/CloudSpanner FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/CloudSpanner (818.09s)
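The kube-apiserver log above is stuck in a loop: the aggregation layer cannot resolve the kube-system/metrics-server service, so fetching the OpenAPI spec for v1beta1.metrics.k8s.io returns 503 and the controller keeps requeueing. A minimal way to confirm whether the backing objects exist, assuming kubectl still points at the addons-388000 context (illustrative commands, not part of the captured run):

	kubectl --context addons-388000 get apiservice v1beta1.metrics.k8s.io
	kubectl --context addons-388000 -n kube-system get deploy,svc metrics-server

An APIService without a matching Service in kube-system is consistent with the 503 / "Rate Limited Requeue" entries seen above.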

TestCertOptions (10.12s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-963000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-963000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.846323667s)

-- stdout --
	* [cert-options-963000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17243
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node cert-options-963000 in cluster cert-options-963000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-963000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-963000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-963000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-963000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-963000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 89 (75.132458ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-963000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-963000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 89
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-963000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-963000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-963000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 89 (40.822792ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-963000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-963000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 89
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contains the right api port. 
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-963000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2023-09-14 15:19:32.214136 -0700 PDT m=+2639.422100292
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-963000 -n cert-options-963000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-963000 -n cert-options-963000: exit status 7 (29.508125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-963000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-963000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-963000
--- FAIL: TestCertOptions (10.12s)
E0914 15:20:13.244294    1425 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/ingress-addon-legacy-438000/client.crt: no such file or directory
E0914 15:21:32.773312    1425 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/functional-398000/client.crt: no such file or directory
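Every start attempt above fails at the same point: the qemu2 driver cannot reach the socket_vmnet socket (Failed to connect to "/var/run/socket_vmnet": Connection refused), so no VM is ever created and the later assertions (SAN entries, kubeconfig API port) fail as a consequence. A minimal sketch for checking the helper on the host, using only paths that appear in the log; the launchd label is an assumption and may differ per install:

	ls -l /var/run/socket_vmnet
	sudo launchctl list | grep -i socket_vmnet
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true

The last line is a bare connectivity probe modeled on how minikube invokes the client further down in this report; if it is refused the same way, the daemon is down on the agent, and the TestCertExpiration and TestDockerFlags runs below fail with the identical error.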

TestCertExpiration (195.46s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-334000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-334000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (10.056135916s)

-- stdout --
	* [cert-expiration-334000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17243
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node cert-expiration-334000 in cluster cert-expiration-334000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-334000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-334000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-334000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-334000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
E0914 15:22:29.379323    1425 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/ingress-addon-legacy-438000/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-334000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.231577666s)

-- stdout --
	* [cert-expiration-334000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17243
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node cert-expiration-334000 in cluster cert-expiration-334000
	* Restarting existing qemu2 VM for "cert-expiration-334000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-334000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-334000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-334000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-334000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17243
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node cert-expiration-334000 in cluster cert-expiration-334000
	* Restarting existing qemu2 VM for "cert-expiration-334000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-334000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-334000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2023-09-14 15:22:32.337763 -0700 PDT m=+2819.549609584
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-334000 -n cert-expiration-334000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-334000 -n cert-expiration-334000: exit status 7 (67.809792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-334000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-334000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-334000
--- FAIL: TestCertExpiration (195.46s)

TestDockerFlags (10.12s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-690000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-690000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.867776292s)

-- stdout --
	* [docker-flags-690000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17243
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node docker-flags-690000 in cluster docker-flags-690000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-690000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0914 15:19:12.132289    4313 out.go:296] Setting OutFile to fd 1 ...
	I0914 15:19:12.132411    4313 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:19:12.132414    4313 out.go:309] Setting ErrFile to fd 2...
	I0914 15:19:12.132416    4313 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:19:12.132542    4313 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17243-1006/.minikube/bin
	I0914 15:19:12.133589    4313 out.go:303] Setting JSON to false
	I0914 15:19:12.148542    4313 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2926,"bootTime":1694727026,"procs":414,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 15:19:12.148605    4313 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0914 15:19:12.152862    4313 out.go:177] * [docker-flags-690000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0914 15:19:12.160894    4313 out.go:177]   - MINIKUBE_LOCATION=17243
	I0914 15:19:12.160966    4313 notify.go:220] Checking for updates...
	I0914 15:19:12.164855    4313 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	I0914 15:19:12.167865    4313 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 15:19:12.170889    4313 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 15:19:12.173839    4313 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	I0914 15:19:12.176814    4313 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 15:19:12.180298    4313 config.go:182] Loaded profile config "force-systemd-flag-409000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 15:19:12.180362    4313 config.go:182] Loaded profile config "multinode-463000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 15:19:12.180415    4313 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 15:19:12.184864    4313 out.go:177] * Using the qemu2 driver based on user configuration
	I0914 15:19:12.191850    4313 start.go:298] selected driver: qemu2
	I0914 15:19:12.191855    4313 start.go:902] validating driver "qemu2" against <nil>
	I0914 15:19:12.191861    4313 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 15:19:12.193824    4313 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0914 15:19:12.196772    4313 out.go:177] * Automatically selected the socket_vmnet network
	I0914 15:19:12.199920    4313 start_flags.go:917] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0914 15:19:12.199956    4313 cni.go:84] Creating CNI manager for ""
	I0914 15:19:12.199963    4313 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 15:19:12.199967    4313 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0914 15:19:12.199976    4313 start_flags.go:321] config:
	{Name:docker-flags-690000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:docker-flags-690000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain
:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/ru
n/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 15:19:12.204077    4313 iso.go:125] acquiring lock: {Name:mkd5b68b252c18146a825ff5365883948b3c6983 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 15:19:12.207695    4313 out.go:177] * Starting control plane node docker-flags-690000 in cluster docker-flags-690000
	I0914 15:19:12.215871    4313 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0914 15:19:12.215890    4313 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0914 15:19:12.215903    4313 cache.go:57] Caching tarball of preloaded images
	I0914 15:19:12.215962    4313 preload.go:174] Found /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 15:19:12.215967    4313 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0914 15:19:12.216041    4313 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/docker-flags-690000/config.json ...
	I0914 15:19:12.216058    4313 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/docker-flags-690000/config.json: {Name:mk9e943c23245967a733c24c8aeff2ab74a884e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 15:19:12.216270    4313 start.go:365] acquiring machines lock for docker-flags-690000: {Name:mk831f2bd5a9a0c932c8a59972319d68edb53d07 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 15:19:12.216303    4313 start.go:369] acquired machines lock for "docker-flags-690000" in 23.791µs
	I0914 15:19:12.216320    4313 start.go:93] Provisioning new machine with config: &{Name:docker-flags-690000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root
SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:docker-flags-690000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion
:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 15:19:12.216356    4313 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 15:19:12.224806    4313 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0914 15:19:12.240665    4313 start.go:159] libmachine.API.Create for "docker-flags-690000" (driver="qemu2")
	I0914 15:19:12.240690    4313 client.go:168] LocalClient.Create starting
	I0914 15:19:12.240749    4313 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem
	I0914 15:19:12.240778    4313 main.go:141] libmachine: Decoding PEM data...
	I0914 15:19:12.240794    4313 main.go:141] libmachine: Parsing certificate...
	I0914 15:19:12.240839    4313 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/cert.pem
	I0914 15:19:12.240861    4313 main.go:141] libmachine: Decoding PEM data...
	I0914 15:19:12.240870    4313 main.go:141] libmachine: Parsing certificate...
	I0914 15:19:12.241200    4313 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17243-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1694625400-17243-arm64.iso...
	I0914 15:19:12.360838    4313 main.go:141] libmachine: Creating SSH key...
	I0914 15:19:12.485668    4313 main.go:141] libmachine: Creating Disk image...
	I0914 15:19:12.485673    4313 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 15:19:12.485797    4313 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/docker-flags-690000/disk.qcow2.raw /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/docker-flags-690000/disk.qcow2
	I0914 15:19:12.494465    4313 main.go:141] libmachine: STDOUT: 
	I0914 15:19:12.494479    4313 main.go:141] libmachine: STDERR: 
	I0914 15:19:12.494530    4313 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/docker-flags-690000/disk.qcow2 +20000M
	I0914 15:19:12.501831    4313 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 15:19:12.501843    4313 main.go:141] libmachine: STDERR: 
	I0914 15:19:12.501867    4313 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/docker-flags-690000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/docker-flags-690000/disk.qcow2
	I0914 15:19:12.501872    4313 main.go:141] libmachine: Starting QEMU VM...
	I0914 15:19:12.501906    4313 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/docker-flags-690000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/docker-flags-690000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/docker-flags-690000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:64:92:dc:71:6a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/docker-flags-690000/disk.qcow2
	I0914 15:19:12.503443    4313 main.go:141] libmachine: STDOUT: 
	I0914 15:19:12.503458    4313 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 15:19:12.503478    4313 client.go:171] LocalClient.Create took 262.786458ms
	I0914 15:19:14.505672    4313 start.go:128] duration metric: createHost completed in 2.289347333s
	I0914 15:19:14.505733    4313 start.go:83] releasing machines lock for "docker-flags-690000", held for 2.289470042s
	W0914 15:19:14.505789    4313 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:19:14.522847    4313 out.go:177] * Deleting "docker-flags-690000" in qemu2 ...
	W0914 15:19:14.537940    4313 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:19:14.537961    4313 start.go:703] Will try again in 5 seconds ...
	I0914 15:19:19.540034    4313 start.go:365] acquiring machines lock for docker-flags-690000: {Name:mk831f2bd5a9a0c932c8a59972319d68edb53d07 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 15:19:19.540364    4313 start.go:369] acquired machines lock for "docker-flags-690000" in 219.209µs
	I0914 15:19:19.540487    4313 start.go:93] Provisioning new machine with config: &{Name:docker-flags-690000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root
SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:docker-flags-690000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion
:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 15:19:19.540706    4313 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 15:19:19.549596    4313 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0914 15:19:19.590819    4313 start.go:159] libmachine.API.Create for "docker-flags-690000" (driver="qemu2")
	I0914 15:19:19.590862    4313 client.go:168] LocalClient.Create starting
	I0914 15:19:19.590964    4313 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem
	I0914 15:19:19.591018    4313 main.go:141] libmachine: Decoding PEM data...
	I0914 15:19:19.591037    4313 main.go:141] libmachine: Parsing certificate...
	I0914 15:19:19.591101    4313 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/cert.pem
	I0914 15:19:19.591137    4313 main.go:141] libmachine: Decoding PEM data...
	I0914 15:19:19.591156    4313 main.go:141] libmachine: Parsing certificate...
	I0914 15:19:19.591586    4313 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17243-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1694625400-17243-arm64.iso...
	I0914 15:19:19.715027    4313 main.go:141] libmachine: Creating SSH key...
	I0914 15:19:19.910197    4313 main.go:141] libmachine: Creating Disk image...
	I0914 15:19:19.910206    4313 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 15:19:19.910361    4313 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/docker-flags-690000/disk.qcow2.raw /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/docker-flags-690000/disk.qcow2
	I0914 15:19:19.919269    4313 main.go:141] libmachine: STDOUT: 
	I0914 15:19:19.919287    4313 main.go:141] libmachine: STDERR: 
	I0914 15:19:19.919341    4313 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/docker-flags-690000/disk.qcow2 +20000M
	I0914 15:19:19.926500    4313 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 15:19:19.926524    4313 main.go:141] libmachine: STDERR: 
	I0914 15:19:19.926545    4313 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/docker-flags-690000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/docker-flags-690000/disk.qcow2
	I0914 15:19:19.926550    4313 main.go:141] libmachine: Starting QEMU VM...
	I0914 15:19:19.926592    4313 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/docker-flags-690000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/docker-flags-690000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/docker-flags-690000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:a7:2f:c7:be:00 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/docker-flags-690000/disk.qcow2
	I0914 15:19:19.928155    4313 main.go:141] libmachine: STDOUT: 
	I0914 15:19:19.928169    4313 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 15:19:19.928184    4313 client.go:171] LocalClient.Create took 337.322875ms
	I0914 15:19:21.930355    4313 start.go:128] duration metric: createHost completed in 2.389662666s
	I0914 15:19:21.930452    4313 start.go:83] releasing machines lock for "docker-flags-690000", held for 2.39008025s
	W0914 15:19:21.930962    4313 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-690000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-690000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:19:21.941770    4313 out.go:177] 
	W0914 15:19:21.945833    4313 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 15:19:21.945881    4313 out.go:239] * 
	* 
	W0914 15:19:21.948408    4313 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 15:19:21.958710    4313 out.go:177] 

                                                
                                                
** /stderr **
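Note on the trace above: the qemu-img steps (raw-to-qcow2 conversion, then growing the image by 20000 MiB to match Disk=20000MB) complete cleanly; the run only fails afterwards, when socket_vmnet_client cannot reach /var/run/socket_vmnet. A standalone sketch of those two disk-image calls, with the long profile paths shortened here for readability, would be:

	# convert the raw seed image written by libmachine into qcow2 format
	qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2
	# grow the virtual disk by 20000 MiB, as requested via Disk=20000MB
	qemu-img resize disk.qcow2 +20000M
	# optional: confirm the new virtual size
	qemu-img info disk.qcow2
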
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-690000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-690000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-690000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 89 (75.997584ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p docker-flags-690000"

                                                
                                                
-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-690000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 89
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-690000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-690000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-690000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-690000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 89 (44.829458ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p docker-flags-690000"

                                                
                                                
-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-690000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 89
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-690000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-690000\"\n"
panic.go:523: *** TestDockerFlags FAILED at 2023-09-14 15:19:22.095969 -0700 PDT m=+2629.303715251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-690000 -n docker-flags-690000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-690000 -n docker-flags-690000: exit status 7 (28.762ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-690000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-690000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-690000
--- FAIL: TestDockerFlags (10.12s)
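Root-cause note: every qemu2-driver failure in this group stops at the same point, a "Connection refused" from /var/run/socket_vmnet, which means the socket_vmnet daemon is not running (or not listening) on the build agent; QEMU itself is never started, so the later --docker-env/--docker-opt assertions have nothing to inspect. A minimal host-side check, using the same client binary the test invokes, might look like the sketch below. The trailing "true" is only a stand-in for the qemu-system-aarch64 command minikube normally passes, and the Homebrew service restart at the end is an assumption (the /opt/socket_vmnet path in the log suggests a from-source install, in which case the daemon would need to be relaunched however it was originally started).

	# is the UNIX socket present at all?
	ls -l /var/run/socket_vmnet
	# can the client connect? pass a harmless command instead of QEMU
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
	# if the connection is refused and socket_vmnet came from Homebrew,
	# restarting the root service is the usual fix (assumed service name)
	sudo brew services restart socket_vmnet
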

                                                
                                    
TestForceSystemdFlag (12.03s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-409000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-409000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (11.822275125s)

                                                
                                                
-- stdout --
	* [force-systemd-flag-409000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17243
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node force-systemd-flag-409000 in cluster force-systemd-flag-409000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-409000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 15:19:05.047345    4291 out.go:296] Setting OutFile to fd 1 ...
	I0914 15:19:05.047455    4291 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:19:05.047458    4291 out.go:309] Setting ErrFile to fd 2...
	I0914 15:19:05.047460    4291 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:19:05.047584    4291 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17243-1006/.minikube/bin
	I0914 15:19:05.048614    4291 out.go:303] Setting JSON to false
	I0914 15:19:05.063775    4291 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2919,"bootTime":1694727026,"procs":416,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 15:19:05.063829    4291 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0914 15:19:05.069648    4291 out.go:177] * [force-systemd-flag-409000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0914 15:19:05.076669    4291 out.go:177]   - MINIKUBE_LOCATION=17243
	I0914 15:19:05.080524    4291 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	I0914 15:19:05.076758    4291 notify.go:220] Checking for updates...
	I0914 15:19:05.088523    4291 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 15:19:05.091507    4291 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 15:19:05.094548    4291 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	I0914 15:19:05.097578    4291 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 15:19:05.100824    4291 config.go:182] Loaded profile config "force-systemd-env-071000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 15:19:05.100891    4291 config.go:182] Loaded profile config "multinode-463000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 15:19:05.100936    4291 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 15:19:05.105520    4291 out.go:177] * Using the qemu2 driver based on user configuration
	I0914 15:19:05.112494    4291 start.go:298] selected driver: qemu2
	I0914 15:19:05.112498    4291 start.go:902] validating driver "qemu2" against <nil>
	I0914 15:19:05.112503    4291 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 15:19:05.114412    4291 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0914 15:19:05.117470    4291 out.go:177] * Automatically selected the socket_vmnet network
	I0914 15:19:05.120643    4291 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0914 15:19:05.120661    4291 cni.go:84] Creating CNI manager for ""
	I0914 15:19:05.120668    4291 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 15:19:05.120672    4291 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0914 15:19:05.120681    4291 start_flags.go:321] config:
	{Name:force-systemd-flag-409000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:force-systemd-flag-409000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 15:19:05.124954    4291 iso.go:125] acquiring lock: {Name:mkd5b68b252c18146a825ff5365883948b3c6983 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 15:19:05.128523    4291 out.go:177] * Starting control plane node force-systemd-flag-409000 in cluster force-systemd-flag-409000
	I0914 15:19:05.136391    4291 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0914 15:19:05.136410    4291 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0914 15:19:05.136423    4291 cache.go:57] Caching tarball of preloaded images
	I0914 15:19:05.136488    4291 preload.go:174] Found /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 15:19:05.136495    4291 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0914 15:19:05.136558    4291 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/force-systemd-flag-409000/config.json ...
	I0914 15:19:05.136574    4291 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/force-systemd-flag-409000/config.json: {Name:mk342652de3d902f384d5a80cffb6360245a0939 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 15:19:05.136808    4291 start.go:365] acquiring machines lock for force-systemd-flag-409000: {Name:mk831f2bd5a9a0c932c8a59972319d68edb53d07 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 15:19:05.136845    4291 start.go:369] acquired machines lock for "force-systemd-flag-409000" in 28.083µs
	I0914 15:19:05.136863    4291 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-409000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:force-systemd-flag-409000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mount
UID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 15:19:05.136897    4291 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 15:19:05.145433    4291 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0914 15:19:05.162335    4291 start.go:159] libmachine.API.Create for "force-systemd-flag-409000" (driver="qemu2")
	I0914 15:19:05.162361    4291 client.go:168] LocalClient.Create starting
	I0914 15:19:05.162445    4291 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem
	I0914 15:19:05.162473    4291 main.go:141] libmachine: Decoding PEM data...
	I0914 15:19:05.162483    4291 main.go:141] libmachine: Parsing certificate...
	I0914 15:19:05.162526    4291 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/cert.pem
	I0914 15:19:05.162546    4291 main.go:141] libmachine: Decoding PEM data...
	I0914 15:19:05.162559    4291 main.go:141] libmachine: Parsing certificate...
	I0914 15:19:05.162908    4291 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17243-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1694625400-17243-arm64.iso...
	I0914 15:19:05.281636    4291 main.go:141] libmachine: Creating SSH key...
	I0914 15:19:05.384518    4291 main.go:141] libmachine: Creating Disk image...
	I0914 15:19:05.384526    4291 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 15:19:05.384662    4291 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/force-systemd-flag-409000/disk.qcow2.raw /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/force-systemd-flag-409000/disk.qcow2
	I0914 15:19:05.393131    4291 main.go:141] libmachine: STDOUT: 
	I0914 15:19:05.393149    4291 main.go:141] libmachine: STDERR: 
	I0914 15:19:05.393216    4291 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/force-systemd-flag-409000/disk.qcow2 +20000M
	I0914 15:19:05.400471    4291 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 15:19:05.400484    4291 main.go:141] libmachine: STDERR: 
	I0914 15:19:05.400508    4291 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/force-systemd-flag-409000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/force-systemd-flag-409000/disk.qcow2
	I0914 15:19:05.400516    4291 main.go:141] libmachine: Starting QEMU VM...
	I0914 15:19:05.400557    4291 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/force-systemd-flag-409000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/force-systemd-flag-409000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/force-systemd-flag-409000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:b1:90:c2:27:ff -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/force-systemd-flag-409000/disk.qcow2
	I0914 15:19:05.402053    4291 main.go:141] libmachine: STDOUT: 
	I0914 15:19:05.402065    4291 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 15:19:05.402083    4291 client.go:171] LocalClient.Create took 239.722459ms
	I0914 15:19:07.404227    4291 start.go:128] duration metric: createHost completed in 2.267347416s
	I0914 15:19:07.404296    4291 start.go:83] releasing machines lock for "force-systemd-flag-409000", held for 2.267489125s
	W0914 15:19:07.404344    4291 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:19:07.414290    4291 out.go:177] * Deleting "force-systemd-flag-409000" in qemu2 ...
	W0914 15:19:07.434777    4291 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:19:07.434808    4291 start.go:703] Will try again in 5 seconds ...
	I0914 15:19:12.436785    4291 start.go:365] acquiring machines lock for force-systemd-flag-409000: {Name:mk831f2bd5a9a0c932c8a59972319d68edb53d07 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 15:19:14.505875    4291 start.go:369] acquired machines lock for "force-systemd-flag-409000" in 2.069095834s
	I0914 15:19:14.506042    4291 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-409000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:force-systemd-flag-409000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mount
UID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 15:19:14.506284    4291 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 15:19:14.513891    4291 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0914 15:19:14.558405    4291 start.go:159] libmachine.API.Create for "force-systemd-flag-409000" (driver="qemu2")
	I0914 15:19:14.558446    4291 client.go:168] LocalClient.Create starting
	I0914 15:19:14.558595    4291 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem
	I0914 15:19:14.558650    4291 main.go:141] libmachine: Decoding PEM data...
	I0914 15:19:14.558674    4291 main.go:141] libmachine: Parsing certificate...
	I0914 15:19:14.558735    4291 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/cert.pem
	I0914 15:19:14.558770    4291 main.go:141] libmachine: Decoding PEM data...
	I0914 15:19:14.558783    4291 main.go:141] libmachine: Parsing certificate...
	I0914 15:19:14.559252    4291 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17243-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1694625400-17243-arm64.iso...
	I0914 15:19:14.686161    4291 main.go:141] libmachine: Creating SSH key...
	I0914 15:19:14.779397    4291 main.go:141] libmachine: Creating Disk image...
	I0914 15:19:14.779403    4291 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 15:19:14.779541    4291 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/force-systemd-flag-409000/disk.qcow2.raw /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/force-systemd-flag-409000/disk.qcow2
	I0914 15:19:14.788173    4291 main.go:141] libmachine: STDOUT: 
	I0914 15:19:14.788193    4291 main.go:141] libmachine: STDERR: 
	I0914 15:19:14.788255    4291 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/force-systemd-flag-409000/disk.qcow2 +20000M
	I0914 15:19:14.795656    4291 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 15:19:14.795669    4291 main.go:141] libmachine: STDERR: 
	I0914 15:19:14.795681    4291 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/force-systemd-flag-409000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/force-systemd-flag-409000/disk.qcow2
	I0914 15:19:14.795695    4291 main.go:141] libmachine: Starting QEMU VM...
	I0914 15:19:14.795736    4291 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/force-systemd-flag-409000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/force-systemd-flag-409000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/force-systemd-flag-409000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:c3:7b:60:26:d0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/force-systemd-flag-409000/disk.qcow2
	I0914 15:19:14.797239    4291 main.go:141] libmachine: STDOUT: 
	I0914 15:19:14.797251    4291 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 15:19:14.797264    4291 client.go:171] LocalClient.Create took 238.818208ms
	I0914 15:19:16.799441    4291 start.go:128] duration metric: createHost completed in 2.29314725s
	I0914 15:19:16.799504    4291 start.go:83] releasing machines lock for "force-systemd-flag-409000", held for 2.293631125s
	W0914 15:19:16.799913    4291 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-409000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-409000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:19:16.811709    4291 out.go:177] 
	W0914 15:19:16.816734    4291 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 15:19:16.816768    4291 out.go:239] * 
	* 
	W0914 15:19:16.819194    4291 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 15:19:16.828557    4291 out.go:177] 

                                                
                                                
** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-409000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-409000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-409000 ssh "docker info --format {{.CgroupDriver}}": exit status 89 (75.54775ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p force-systemd-flag-409000"

                                                
                                                
-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-409000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 89
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2023-09-14 15:19:16.920486 -0700 PDT m=+2624.128121251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-409000 -n force-systemd-flag-409000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-409000 -n force-systemd-flag-409000: exit status 7 (33.859875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-409000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-409000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-409000
--- FAIL: TestForceSystemdFlag (12.03s)
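For reference, the assertion this test never reached: with --force-systemd, docker_test.go expects the cgroup driver reported by Docker inside the VM to be "systemd"; because the control plane never started, the ssh step could only return exit status 89. On a host where socket_vmnet is healthy, the same check can be run by hand roughly as follows (commands taken from the log above; the expected "systemd" output restates the test's own assumption):

	out/minikube-darwin-arm64 start -p force-systemd-flag-409000 --memory=2048 --force-systemd --driver=qemu2
	out/minikube-darwin-arm64 -p force-systemd-flag-409000 ssh "docker info --format {{.CgroupDriver}}"
	# expected output: systemd
	out/minikube-darwin-arm64 delete -p force-systemd-flag-409000
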

                                                
                                    
TestForceSystemdEnv (9.92s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-071000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-071000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.717759875s)

                                                
                                                
-- stdout --
	* [force-systemd-env-071000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17243
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node force-systemd-env-071000 in cluster force-systemd-env-071000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-071000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 15:19:02.209942    4264 out.go:296] Setting OutFile to fd 1 ...
	I0914 15:19:02.210078    4264 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:19:02.210081    4264 out.go:309] Setting ErrFile to fd 2...
	I0914 15:19:02.210084    4264 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:19:02.210222    4264 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17243-1006/.minikube/bin
	I0914 15:19:02.211229    4264 out.go:303] Setting JSON to false
	I0914 15:19:02.227296    4264 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2916,"bootTime":1694727026,"procs":410,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 15:19:02.227377    4264 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0914 15:19:02.232326    4264 out.go:177] * [force-systemd-env-071000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0914 15:19:02.244254    4264 out.go:177]   - MINIKUBE_LOCATION=17243
	I0914 15:19:02.240318    4264 notify.go:220] Checking for updates...
	I0914 15:19:02.250262    4264 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	I0914 15:19:02.254272    4264 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 15:19:02.258315    4264 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 15:19:02.261304    4264 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	I0914 15:19:02.264292    4264 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0914 15:19:02.267588    4264 config.go:182] Loaded profile config "multinode-463000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 15:19:02.267652    4264 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 15:19:02.271148    4264 out.go:177] * Using the qemu2 driver based on user configuration
	I0914 15:19:02.278281    4264 start.go:298] selected driver: qemu2
	I0914 15:19:02.278285    4264 start.go:902] validating driver "qemu2" against <nil>
	I0914 15:19:02.278290    4264 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 15:19:02.280207    4264 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0914 15:19:02.283263    4264 out.go:177] * Automatically selected the socket_vmnet network
	I0914 15:19:02.286326    4264 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0914 15:19:02.286341    4264 cni.go:84] Creating CNI manager for ""
	I0914 15:19:02.286346    4264 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 15:19:02.286349    4264 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0914 15:19:02.286356    4264 start_flags.go:321] config:
	{Name:force-systemd-env-071000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:force-systemd-env-071000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 15:19:02.290042    4264 iso.go:125] acquiring lock: {Name:mkd5b68b252c18146a825ff5365883948b3c6983 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 15:19:02.297262    4264 out.go:177] * Starting control plane node force-systemd-env-071000 in cluster force-systemd-env-071000
	I0914 15:19:02.301276    4264 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0914 15:19:02.301301    4264 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0914 15:19:02.301311    4264 cache.go:57] Caching tarball of preloaded images
	I0914 15:19:02.301365    4264 preload.go:174] Found /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 15:19:02.301371    4264 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0914 15:19:02.301439    4264 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/force-systemd-env-071000/config.json ...
	I0914 15:19:02.301450    4264 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/force-systemd-env-071000/config.json: {Name:mkf91389af527bf867ddb6e6bd2446e641d950bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 15:19:02.301648    4264 start.go:365] acquiring machines lock for force-systemd-env-071000: {Name:mk831f2bd5a9a0c932c8a59972319d68edb53d07 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 15:19:02.301677    4264 start.go:369] acquired machines lock for "force-systemd-env-071000" in 23.166µs
	I0914 15:19:02.301688    4264 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-071000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kube
rnetesConfig:{KubernetesVersion:v1.28.1 ClusterName:force-systemd-env-071000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUI
D:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 15:19:02.301712    4264 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 15:19:02.309291    4264 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0914 15:19:02.322674    4264 start.go:159] libmachine.API.Create for "force-systemd-env-071000" (driver="qemu2")
	I0914 15:19:02.322701    4264 client.go:168] LocalClient.Create starting
	I0914 15:19:02.322758    4264 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem
	I0914 15:19:02.322781    4264 main.go:141] libmachine: Decoding PEM data...
	I0914 15:19:02.322792    4264 main.go:141] libmachine: Parsing certificate...
	I0914 15:19:02.322833    4264 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/cert.pem
	I0914 15:19:02.322850    4264 main.go:141] libmachine: Decoding PEM data...
	I0914 15:19:02.322856    4264 main.go:141] libmachine: Parsing certificate...
	I0914 15:19:02.324441    4264 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17243-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1694625400-17243-arm64.iso...
	I0914 15:19:02.436824    4264 main.go:141] libmachine: Creating SSH key...
	I0914 15:19:02.522984    4264 main.go:141] libmachine: Creating Disk image...
	I0914 15:19:02.522990    4264 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 15:19:02.523139    4264 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/force-systemd-env-071000/disk.qcow2.raw /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/force-systemd-env-071000/disk.qcow2
	I0914 15:19:02.531595    4264 main.go:141] libmachine: STDOUT: 
	I0914 15:19:02.531609    4264 main.go:141] libmachine: STDERR: 
	I0914 15:19:02.531661    4264 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/force-systemd-env-071000/disk.qcow2 +20000M
	I0914 15:19:02.538737    4264 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 15:19:02.538760    4264 main.go:141] libmachine: STDERR: 
	I0914 15:19:02.538775    4264 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/force-systemd-env-071000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/force-systemd-env-071000/disk.qcow2
	I0914 15:19:02.538783    4264 main.go:141] libmachine: Starting QEMU VM...
	I0914 15:19:02.538814    4264 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/force-systemd-env-071000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/force-systemd-env-071000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/force-systemd-env-071000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:52:57:b9:db:3a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/force-systemd-env-071000/disk.qcow2
	I0914 15:19:02.540281    4264 main.go:141] libmachine: STDOUT: 
	I0914 15:19:02.540294    4264 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 15:19:02.540310    4264 client.go:171] LocalClient.Create took 217.609292ms
	I0914 15:19:04.542497    4264 start.go:128] duration metric: createHost completed in 2.240798625s
	I0914 15:19:04.542621    4264 start.go:83] releasing machines lock for "force-systemd-env-071000", held for 2.2409425s
	W0914 15:19:04.542775    4264 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:19:04.555040    4264 out.go:177] * Deleting "force-systemd-env-071000" in qemu2 ...
	W0914 15:19:04.574314    4264 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:19:04.574342    4264 start.go:703] Will try again in 5 seconds ...
	I0914 15:19:09.575734    4264 start.go:365] acquiring machines lock for force-systemd-env-071000: {Name:mk831f2bd5a9a0c932c8a59972319d68edb53d07 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 15:19:09.576202    4264 start.go:369] acquired machines lock for "force-systemd-env-071000" in 352.625µs
	I0914 15:19:09.576346    4264 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-071000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kube
rnetesConfig:{KubernetesVersion:v1.28.1 ClusterName:force-systemd-env-071000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUI
D:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 15:19:09.576548    4264 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 15:19:09.584058    4264 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0914 15:19:09.630188    4264 start.go:159] libmachine.API.Create for "force-systemd-env-071000" (driver="qemu2")
	I0914 15:19:09.630242    4264 client.go:168] LocalClient.Create starting
	I0914 15:19:09.630374    4264 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem
	I0914 15:19:09.630444    4264 main.go:141] libmachine: Decoding PEM data...
	I0914 15:19:09.630465    4264 main.go:141] libmachine: Parsing certificate...
	I0914 15:19:09.630551    4264 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/cert.pem
	I0914 15:19:09.630592    4264 main.go:141] libmachine: Decoding PEM data...
	I0914 15:19:09.630604    4264 main.go:141] libmachine: Parsing certificate...
	I0914 15:19:09.631196    4264 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17243-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1694625400-17243-arm64.iso...
	I0914 15:19:09.757986    4264 main.go:141] libmachine: Creating SSH key...
	I0914 15:19:09.839518    4264 main.go:141] libmachine: Creating Disk image...
	I0914 15:19:09.839526    4264 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 15:19:09.839665    4264 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/force-systemd-env-071000/disk.qcow2.raw /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/force-systemd-env-071000/disk.qcow2
	I0914 15:19:09.847935    4264 main.go:141] libmachine: STDOUT: 
	I0914 15:19:09.847950    4264 main.go:141] libmachine: STDERR: 
	I0914 15:19:09.848003    4264 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/force-systemd-env-071000/disk.qcow2 +20000M
	I0914 15:19:09.855103    4264 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 15:19:09.855114    4264 main.go:141] libmachine: STDERR: 
	I0914 15:19:09.855126    4264 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/force-systemd-env-071000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/force-systemd-env-071000/disk.qcow2
	I0914 15:19:09.855133    4264 main.go:141] libmachine: Starting QEMU VM...
	I0914 15:19:09.855171    4264 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/force-systemd-env-071000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/force-systemd-env-071000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/force-systemd-env-071000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:37:c0:8c:e4:90 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/force-systemd-env-071000/disk.qcow2
	I0914 15:19:09.856634    4264 main.go:141] libmachine: STDOUT: 
	I0914 15:19:09.856647    4264 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 15:19:09.856659    4264 client.go:171] LocalClient.Create took 226.413833ms
	I0914 15:19:11.858831    4264 start.go:128] duration metric: createHost completed in 2.282297583s
	I0914 15:19:11.858934    4264 start.go:83] releasing machines lock for "force-systemd-env-071000", held for 2.282757792s
	W0914 15:19:11.859423    4264 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-071000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-071000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:19:11.869201    4264 out.go:177] 
	W0914 15:19:11.873257    4264 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 15:19:11.873280    4264 out.go:239] * 
	* 
	W0914 15:19:11.875797    4264 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 15:19:11.884197    4264 out.go:177] 

                                                
                                                
** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-071000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-071000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-071000 ssh "docker info --format {{.CgroupDriver}}": exit status 89 (76.759917ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p force-systemd-env-071000"

                                                
                                                
-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-071000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 89
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2023-09-14 15:19:11.976805 -0700 PDT m=+2619.184333917
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-071000 -n force-systemd-env-071000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-071000 -n force-systemd-env-071000: exit status 7 (33.670625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-071000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-071000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-071000
--- FAIL: TestForceSystemdEnv (9.92s)
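Note on this failure: it is not a Docker or systemd problem. createHost aborts because socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so the qemu2 VM is never started and every later step (ssh, docker info) finds the profile stopped. A minimal standalone probe of that precondition might look like the sketch below; this is not from the minikube codebase, the socket path is simply the SocketVMnetPath reported in the log above, and the file name is illustrative.

// probe_socket_vmnet.go: standalone sketch, not part of the minikube test suite.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// SocketVMnetPath reported in the failing run above; adjust for other installs.
	const socketPath = "/var/run/socket_vmnet"

	conn, err := net.DialTimeout("unix", socketPath, 2*time.Second)
	if err != nil {
		// This mirrors the "Connection refused" that aborts createHost in the log.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", socketPath, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Printf("socket_vmnet is accepting connections at %s\n", socketPath)
}

If such a probe fails on the build agent, restarting the socket_vmnet daemon (however it is managed on that host) before re-running the suite is the likely fix; the suggested "minikube delete -p force-systemd-env-071000" only cleans up the half-created machine.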

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (31.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1626: (dbg) Run:  kubectl --context functional-398000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-398000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-d6spp" [17388bde-57a9-4d82-ac4a-1ca198a0a870] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-d6spp" [17388bde-57a9-4d82-ac4a-1ca198a0a870] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.006871958s
functional_test.go:1648: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.105.4:31150
functional_test.go:1660: error fetching http://192.168.105.4:31150: Get "http://192.168.105.4:31150": dial tcp 192.168.105.4:31150: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:31150: Get "http://192.168.105.4:31150": dial tcp 192.168.105.4:31150: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:31150: Get "http://192.168.105.4:31150": dial tcp 192.168.105.4:31150: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:31150: Get "http://192.168.105.4:31150": dial tcp 192.168.105.4:31150: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:31150: Get "http://192.168.105.4:31150": dial tcp 192.168.105.4:31150: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:31150: Get "http://192.168.105.4:31150": dial tcp 192.168.105.4:31150: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:31150: Get "http://192.168.105.4:31150": dial tcp 192.168.105.4:31150: connect: connection refused
functional_test.go:1680: failed to fetch http://192.168.105.4:31150: Get "http://192.168.105.4:31150": dial tcp 192.168.105.4:31150: connect: connection refused
functional_test.go:1597: service test failed - dumping debug information
functional_test.go:1598: -----------------------service failure post-mortem--------------------------------
functional_test.go:1601: (dbg) Run:  kubectl --context functional-398000 describe po hello-node-connect
functional_test.go:1605: hello-node pod describe:
Name:             hello-node-connect-7799dfb7c6-d6spp
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-398000/192.168.105.4
Start Time:       Thu, 14 Sep 2023 15:06:43 -0700
Labels:           app=hello-node-connect
pod-template-hash=7799dfb7c6
Annotations:      <none>
Status:           Running
IP:               10.244.0.7
IPs:
IP:           10.244.0.7
Controlled By:  ReplicaSet/hello-node-connect-7799dfb7c6
Containers:
echoserver-arm:
Container ID:   docker://288d26c97910eae0c39bfbb31fb73e6b1f6ca2e294c39ef16da1ce4e7de4f8e5
Image:          registry.k8s.io/echoserver-arm:1.8
Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
Port:           <none>
Host Port:      <none>
State:          Terminated
Reason:       Error
Exit Code:    1
Started:      Thu, 14 Sep 2023 15:07:05 -0700
Finished:     Thu, 14 Sep 2023 15:07:05 -0700
Last State:     Terminated
Reason:       Error
Exit Code:    1
Started:      Thu, 14 Sep 2023 15:06:47 -0700
Finished:     Thu, 14 Sep 2023 15:06:47 -0700
Ready:          False
Restart Count:  2
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qwrjn (ro)
Conditions:
Type              Status
Initialized       True 
Ready             False 
ContainersReady   False 
PodScheduled      True 
Volumes:
kube-api-access-qwrjn:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age               From               Message
----     ------     ----              ----               -------
Normal   Scheduled  31s               default-scheduler  Successfully assigned default/hello-node-connect-7799dfb7c6-d6spp to functional-398000
Normal   Pulling    31s               kubelet            Pulling image "registry.k8s.io/echoserver-arm:1.8"
Normal   Pulled     27s               kubelet            Successfully pulled image "registry.k8s.io/echoserver-arm:1.8" in 3.027s (3.027s including waiting)
Normal   Created    9s (x3 over 27s)  kubelet            Created container echoserver-arm
Normal   Started    9s (x3 over 27s)  kubelet            Started container echoserver-arm
Normal   Pulled     9s (x2 over 27s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
Warning  BackOff    9s (x3 over 26s)  kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-7799dfb7c6-d6spp_default(17388bde-57a9-4d82-ac4a-1ca198a0a870)

                                                
                                                
functional_test.go:1607: (dbg) Run:  kubectl --context functional-398000 logs -l app=hello-node-connect
functional_test.go:1611: hello-node logs:
exec /usr/sbin/nginx: exec format error
functional_test.go:1613: (dbg) Run:  kubectl --context functional-398000 describe svc hello-node-connect
functional_test.go:1617: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.99.1.28
IPs:                      10.99.1.28
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31150/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
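Note on this failure: the Service has no Endpoints because its only matching pod never becomes Ready; the container exits immediately with "exec /usr/sbin/nginx: exec format error", which generally means the kernel was asked to run a binary built for a different CPU architecture than the arm64 node. A quick first-pass check is to compare the architecture recorded in the image metadata with the host architecture, as in the rough sketch below. It is standalone (not part of the test suite), shells out to the docker CLI, and will not catch an image whose manifest claims arm64 but which still bundles foreign-architecture binaries, which is what the log above suggests happened with echoserver-arm:1.8.

// check_image_arch.go: standalone sketch, not part of the minikube test suite.
package main

import (
	"fmt"
	"os/exec"
	"runtime"
	"strings"
)

func main() {
	// Image name taken from the failing deployment above.
	image := "registry.k8s.io/echoserver-arm:1.8"

	// "docker image inspect" reports the architecture declared in the image config.
	out, err := exec.Command("docker", "image", "inspect",
		"--format", "{{.Architecture}}", image).Output()
	if err != nil {
		fmt.Println("inspect failed (the image may not be pulled locally):", err)
		return
	}
	imageArch := strings.TrimSpace(string(out))

	fmt.Printf("image architecture: %s, host architecture: %s\n", imageArch, runtime.GOARCH)
	if imageArch != runtime.GOARCH {
		fmt.Println("mismatch: containers from this image will exit with an exec format error")
	}
}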
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-398000 -n functional-398000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                                 Args                                                  |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| cache   | functional-398000 cache reload                                                                        | functional-398000 | jenkins | v1.31.2 | 14 Sep 23 15:05 PDT | 14 Sep 23 15:05 PDT |
	| ssh     | functional-398000 ssh                                                                                 | functional-398000 | jenkins | v1.31.2 | 14 Sep 23 15:05 PDT | 14 Sep 23 15:05 PDT |
	|         | sudo crictl inspecti                                                                                  |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                                                          |                   |         |         |                     |                     |
	| cache   | delete                                                                                                | minikube          | jenkins | v1.31.2 | 14 Sep 23 15:05 PDT | 14 Sep 23 15:05 PDT |
	|         | registry.k8s.io/pause:3.1                                                                             |                   |         |         |                     |                     |
	| cache   | delete                                                                                                | minikube          | jenkins | v1.31.2 | 14 Sep 23 15:05 PDT | 14 Sep 23 15:05 PDT |
	|         | registry.k8s.io/pause:latest                                                                          |                   |         |         |                     |                     |
	| kubectl | functional-398000 kubectl --                                                                          | functional-398000 | jenkins | v1.31.2 | 14 Sep 23 15:05 PDT | 14 Sep 23 15:05 PDT |
	|         | --context functional-398000                                                                           |                   |         |         |                     |                     |
	|         | get pods                                                                                              |                   |         |         |                     |                     |
	| start   | -p functional-398000                                                                                  | functional-398000 | jenkins | v1.31.2 | 14 Sep 23 15:05 PDT | 14 Sep 23 15:06 PDT |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision                              |                   |         |         |                     |                     |
	|         | --wait=all                                                                                            |                   |         |         |                     |                     |
	| service | invalid-svc -p                                                                                        | functional-398000 | jenkins | v1.31.2 | 14 Sep 23 15:06 PDT |                     |
	|         | functional-398000                                                                                     |                   |         |         |                     |                     |
	| config  | functional-398000 config unset                                                                        | functional-398000 | jenkins | v1.31.2 | 14 Sep 23 15:06 PDT | 14 Sep 23 15:06 PDT |
	|         | cpus                                                                                                  |                   |         |         |                     |                     |
	| cp      | functional-398000 cp                                                                                  | functional-398000 | jenkins | v1.31.2 | 14 Sep 23 15:06 PDT | 14 Sep 23 15:06 PDT |
	|         | testdata/cp-test.txt                                                                                  |                   |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                              |                   |         |         |                     |                     |
	| config  | functional-398000 config get                                                                          | functional-398000 | jenkins | v1.31.2 | 14 Sep 23 15:06 PDT |                     |
	|         | cpus                                                                                                  |                   |         |         |                     |                     |
	| config  | functional-398000 config set                                                                          | functional-398000 | jenkins | v1.31.2 | 14 Sep 23 15:06 PDT | 14 Sep 23 15:06 PDT |
	|         | cpus 2                                                                                                |                   |         |         |                     |                     |
	| ssh     | functional-398000 ssh -n                                                                              | functional-398000 | jenkins | v1.31.2 | 14 Sep 23 15:06 PDT | 14 Sep 23 15:06 PDT |
	|         | functional-398000 sudo cat                                                                            |                   |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                              |                   |         |         |                     |                     |
	| config  | functional-398000 config get                                                                          | functional-398000 | jenkins | v1.31.2 | 14 Sep 23 15:06 PDT | 14 Sep 23 15:06 PDT |
	|         | cpus                                                                                                  |                   |         |         |                     |                     |
	| config  | functional-398000 config unset                                                                        | functional-398000 | jenkins | v1.31.2 | 14 Sep 23 15:06 PDT | 14 Sep 23 15:06 PDT |
	|         | cpus                                                                                                  |                   |         |         |                     |                     |
	| cp      | functional-398000 cp functional-398000:/home/docker/cp-test.txt                                       | functional-398000 | jenkins | v1.31.2 | 14 Sep 23 15:06 PDT | 14 Sep 23 15:06 PDT |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd341327751/001/cp-test.txt |                   |         |         |                     |                     |
	| config  | functional-398000 config get                                                                          | functional-398000 | jenkins | v1.31.2 | 14 Sep 23 15:06 PDT |                     |
	|         | cpus                                                                                                  |                   |         |         |                     |                     |
	| ssh     | functional-398000 ssh echo                                                                            | functional-398000 | jenkins | v1.31.2 | 14 Sep 23 15:06 PDT | 14 Sep 23 15:06 PDT |
	|         | hello                                                                                                 |                   |         |         |                     |                     |
	| ssh     | functional-398000 ssh -n                                                                              | functional-398000 | jenkins | v1.31.2 | 14 Sep 23 15:06 PDT | 14 Sep 23 15:06 PDT |
	|         | functional-398000 sudo cat                                                                            |                   |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                              |                   |         |         |                     |                     |
	| ssh     | functional-398000 ssh cat                                                                             | functional-398000 | jenkins | v1.31.2 | 14 Sep 23 15:06 PDT | 14 Sep 23 15:06 PDT |
	|         | /etc/hostname                                                                                         |                   |         |         |                     |                     |
	| tunnel  | functional-398000 tunnel                                                                              | functional-398000 | jenkins | v1.31.2 | 14 Sep 23 15:06 PDT |                     |
	|         | --alsologtostderr                                                                                     |                   |         |         |                     |                     |
	| tunnel  | functional-398000 tunnel                                                                              | functional-398000 | jenkins | v1.31.2 | 14 Sep 23 15:06 PDT |                     |
	|         | --alsologtostderr                                                                                     |                   |         |         |                     |                     |
	| tunnel  | functional-398000 tunnel                                                                              | functional-398000 | jenkins | v1.31.2 | 14 Sep 23 15:06 PDT |                     |
	|         | --alsologtostderr                                                                                     |                   |         |         |                     |                     |
	| addons  | functional-398000 addons list                                                                         | functional-398000 | jenkins | v1.31.2 | 14 Sep 23 15:06 PDT | 14 Sep 23 15:06 PDT |
	| addons  | functional-398000 addons list                                                                         | functional-398000 | jenkins | v1.31.2 | 14 Sep 23 15:06 PDT | 14 Sep 23 15:06 PDT |
	|         | -o json                                                                                               |                   |         |         |                     |                     |
	| service | functional-398000 service                                                                             | functional-398000 | jenkins | v1.31.2 | 14 Sep 23 15:06 PDT | 14 Sep 23 15:06 PDT |
	|         | hello-node-connect --url                                                                              |                   |         |         |                     |                     |
	|---------|-------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/14 15:05:50
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.21.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 15:05:50.191514    2793 out.go:296] Setting OutFile to fd 1 ...
	I0914 15:05:50.191654    2793 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:05:50.191656    2793 out.go:309] Setting ErrFile to fd 2...
	I0914 15:05:50.191658    2793 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:05:50.191810    2793 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17243-1006/.minikube/bin
	I0914 15:05:50.192850    2793 out.go:303] Setting JSON to false
	I0914 15:05:50.208410    2793 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2124,"bootTime":1694727026,"procs":405,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 15:05:50.208504    2793 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0914 15:05:50.212321    2793 out.go:177] * [functional-398000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0914 15:05:50.219159    2793 out.go:177]   - MINIKUBE_LOCATION=17243
	I0914 15:05:50.223310    2793 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	I0914 15:05:50.219238    2793 notify.go:220] Checking for updates...
	I0914 15:05:50.231364    2793 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 15:05:50.234356    2793 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 15:05:50.237365    2793 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	I0914 15:05:50.240237    2793 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 15:05:50.243673    2793 config.go:182] Loaded profile config "functional-398000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 15:05:50.243729    2793 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 15:05:50.248376    2793 out.go:177] * Using the qemu2 driver based on existing profile
	I0914 15:05:50.255339    2793 start.go:298] selected driver: qemu2
	I0914 15:05:50.255341    2793 start.go:902] validating driver "qemu2" against &{Name:functional-398000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.1 ClusterName:functional-398000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 15:05:50.255379    2793 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 15:05:50.257113    2793 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 15:05:50.257135    2793 cni.go:84] Creating CNI manager for ""
	I0914 15:05:50.257141    2793 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 15:05:50.257145    2793 start_flags.go:321] config:
	{Name:functional-398000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:functional-398000 Namespace:default A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 15:05:50.260954    2793 iso.go:125] acquiring lock: {Name:mkd5b68b252c18146a825ff5365883948b3c6983 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 15:05:50.268274    2793 out.go:177] * Starting control plane node functional-398000 in cluster functional-398000
	I0914 15:05:50.272313    2793 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0914 15:05:50.272326    2793 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0914 15:05:50.272335    2793 cache.go:57] Caching tarball of preloaded images
	I0914 15:05:50.272384    2793 preload.go:174] Found /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 15:05:50.272388    2793 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0914 15:05:50.272448    2793 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/functional-398000/config.json ...
	I0914 15:05:50.272719    2793 start.go:365] acquiring machines lock for functional-398000: {Name:mk831f2bd5a9a0c932c8a59972319d68edb53d07 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 15:05:50.272745    2793 start.go:369] acquired machines lock for "functional-398000" in 22.292µs
	I0914 15:05:50.272752    2793 start.go:96] Skipping create...Using existing machine configuration
	I0914 15:05:50.272756    2793 fix.go:54] fixHost starting: 
	I0914 15:05:50.273336    2793 fix.go:102] recreateIfNeeded on functional-398000: state=Running err=<nil>
	W0914 15:05:50.273344    2793 fix.go:128] unexpected machine state, will restart: <nil>
	I0914 15:05:50.276268    2793 out.go:177] * Updating the running qemu2 "functional-398000" VM ...
	I0914 15:05:50.284120    2793 machine.go:88] provisioning docker machine ...
	I0914 15:05:50.284129    2793 buildroot.go:166] provisioning hostname "functional-398000"
	I0914 15:05:50.284159    2793 main.go:141] libmachine: Using SSH client type: native
	I0914 15:05:50.284396    2793 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1051e8760] 0x1051eaed0 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0914 15:05:50.284400    2793 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-398000 && echo "functional-398000" | sudo tee /etc/hostname
	I0914 15:05:50.361463    2793 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-398000
	
	I0914 15:05:50.361503    2793 main.go:141] libmachine: Using SSH client type: native
	I0914 15:05:50.361734    2793 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1051e8760] 0x1051eaed0 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0914 15:05:50.361741    2793 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-398000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-398000/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-398000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 15:05:50.432871    2793 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 15:05:50.432880    2793 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17243-1006/.minikube CaCertPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17243-1006/.minikube}
	I0914 15:05:50.432887    2793 buildroot.go:174] setting up certificates
	I0914 15:05:50.432890    2793 provision.go:83] configureAuth start
	I0914 15:05:50.432893    2793 provision.go:138] copyHostCerts
	I0914 15:05:50.432962    2793 exec_runner.go:144] found /Users/jenkins/minikube-integration/17243-1006/.minikube/ca.pem, removing ...
	I0914 15:05:50.432965    2793 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17243-1006/.minikube/ca.pem
	I0914 15:05:50.433076    2793 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17243-1006/.minikube/ca.pem (1082 bytes)
	I0914 15:05:50.433249    2793 exec_runner.go:144] found /Users/jenkins/minikube-integration/17243-1006/.minikube/cert.pem, removing ...
	I0914 15:05:50.433250    2793 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17243-1006/.minikube/cert.pem
	I0914 15:05:50.433293    2793 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17243-1006/.minikube/cert.pem (1123 bytes)
	I0914 15:05:50.433378    2793 exec_runner.go:144] found /Users/jenkins/minikube-integration/17243-1006/.minikube/key.pem, removing ...
	I0914 15:05:50.433379    2793 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17243-1006/.minikube/key.pem
	I0914 15:05:50.433415    2793 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17243-1006/.minikube/key.pem (1675 bytes)
	I0914 15:05:50.433590    2793 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca-key.pem org=jenkins.functional-398000 san=[192.168.105.4 192.168.105.4 localhost 127.0.0.1 minikube functional-398000]
	I0914 15:05:50.531162    2793 provision.go:172] copyRemoteCerts
	I0914 15:05:50.531197    2793 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 15:05:50.531204    2793 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/functional-398000/id_rsa Username:docker}
	I0914 15:05:50.570706    2793 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0914 15:05:50.577249    2793 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0914 15:05:50.584704    2793 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0914 15:05:50.592028    2793 provision.go:86] duration metric: configureAuth took 159.135417ms
	I0914 15:05:50.592045    2793 buildroot.go:189] setting minikube options for container-runtime
	I0914 15:05:50.592181    2793 config.go:182] Loaded profile config "functional-398000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 15:05:50.592222    2793 main.go:141] libmachine: Using SSH client type: native
	I0914 15:05:50.592442    2793 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1051e8760] 0x1051eaed0 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0914 15:05:50.592445    2793 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0914 15:05:50.663924    2793 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0914 15:05:50.663930    2793 buildroot.go:70] root file system type: tmpfs
	I0914 15:05:50.663982    2793 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0914 15:05:50.664042    2793 main.go:141] libmachine: Using SSH client type: native
	I0914 15:05:50.664275    2793 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1051e8760] 0x1051eaed0 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0914 15:05:50.664312    2793 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0914 15:05:50.739438    2793 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0914 15:05:50.739484    2793 main.go:141] libmachine: Using SSH client type: native
	I0914 15:05:50.739707    2793 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1051e8760] 0x1051eaed0 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0914 15:05:50.739713    2793 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0914 15:05:50.812921    2793 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 15:05:50.812927    2793 machine.go:91] provisioned docker machine in 528.816709ms
	I0914 15:05:50.812931    2793 start.go:300] post-start starting for "functional-398000" (driver="qemu2")
	I0914 15:05:50.812935    2793 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 15:05:50.812978    2793 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 15:05:50.812985    2793 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/functional-398000/id_rsa Username:docker}
	I0914 15:05:50.851154    2793 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 15:05:50.852804    2793 info.go:137] Remote host: Buildroot 2021.02.12
	I0914 15:05:50.852808    2793 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17243-1006/.minikube/addons for local assets ...
	I0914 15:05:50.852862    2793 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17243-1006/.minikube/files for local assets ...
	I0914 15:05:50.852961    2793 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17243-1006/.minikube/files/etc/ssl/certs/14252.pem -> 14252.pem in /etc/ssl/certs
	I0914 15:05:50.853064    2793 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17243-1006/.minikube/files/etc/test/nested/copy/1425/hosts -> hosts in /etc/test/nested/copy/1425
	I0914 15:05:50.853097    2793 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1425
	I0914 15:05:50.856619    2793 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/files/etc/ssl/certs/14252.pem --> /etc/ssl/certs/14252.pem (1708 bytes)
	I0914 15:05:50.863798    2793 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/files/etc/test/nested/copy/1425/hosts --> /etc/test/nested/copy/1425/hosts (40 bytes)
	I0914 15:05:50.870966    2793 start.go:303] post-start completed in 58.031625ms
	I0914 15:05:50.870972    2793 fix.go:56] fixHost completed within 598.231084ms
	I0914 15:05:50.871014    2793 main.go:141] libmachine: Using SSH client type: native
	I0914 15:05:50.871241    2793 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1051e8760] 0x1051eaed0 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0914 15:05:50.871244    2793 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0914 15:05:50.943403    2793 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694729150.982006168
	
	I0914 15:05:50.943409    2793 fix.go:206] guest clock: 1694729150.982006168
	I0914 15:05:50.943412    2793 fix.go:219] Guest: 2023-09-14 15:05:50.982006168 -0700 PDT Remote: 2023-09-14 15:05:50.870973 -0700 PDT m=+0.698647876 (delta=111.033168ms)
	I0914 15:05:50.943426    2793 fix.go:190] guest clock delta is within tolerance: 111.033168ms
	I0914 15:05:50.943428    2793 start.go:83] releasing machines lock for "functional-398000", held for 670.696958ms
	I0914 15:05:50.943771    2793 ssh_runner.go:195] Run: cat /version.json
	I0914 15:05:50.943778    2793 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/functional-398000/id_rsa Username:docker}
	I0914 15:05:50.943793    2793 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 15:05:50.943813    2793 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/functional-398000/id_rsa Username:docker}
	I0914 15:05:50.983274    2793 ssh_runner.go:195] Run: systemctl --version
	I0914 15:05:51.023354    2793 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 15:05:51.025275    2793 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 15:05:51.025313    2793 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 15:05:51.028674    2793 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0914 15:05:51.028678    2793 start.go:469] detecting cgroup driver to use...
	I0914 15:05:51.028731    2793 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 15:05:51.034320    2793 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0914 15:05:51.037390    2793 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0914 15:05:51.040956    2793 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0914 15:05:51.040978    2793 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0914 15:05:51.044746    2793 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0914 15:05:51.048456    2793 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0914 15:05:51.051835    2793 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0914 15:05:51.055215    2793 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 15:05:51.058291    2793 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0914 15:05:51.061195    2793 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 15:05:51.064662    2793 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 15:05:51.067643    2793 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 15:05:51.153640    2793 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0914 15:05:51.160387    2793 start.go:469] detecting cgroup driver to use...
	I0914 15:05:51.160450    2793 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0914 15:05:51.167319    2793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 15:05:51.173686    2793 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 15:05:51.180475    2793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 15:05:51.185380    2793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0914 15:05:51.190250    2793 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 15:05:51.195315    2793 ssh_runner.go:195] Run: which cri-dockerd
	I0914 15:05:51.196676    2793 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0914 15:05:51.199934    2793 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0914 15:05:51.205156    2793 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0914 15:05:51.307875    2793 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0914 15:05:51.403555    2793 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0914 15:05:51.403565    2793 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0914 15:05:51.409606    2793 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 15:05:51.498236    2793 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0914 15:06:02.775724    2793 ssh_runner.go:235] Completed: sudo systemctl restart docker: (11.2777565s)
	I0914 15:06:02.775793    2793 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0914 15:06:02.861377    2793 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0914 15:06:02.946495    2793 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0914 15:06:03.031610    2793 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 15:06:03.113047    2793 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0914 15:06:03.120621    2793 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 15:06:03.235275    2793 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0914 15:06:03.262223    2793 start.go:516] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0914 15:06:03.262290    2793 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0914 15:06:03.264561    2793 start.go:537] Will wait 60s for crictl version
	I0914 15:06:03.264586    2793 ssh_runner.go:195] Run: which crictl
	I0914 15:06:03.265972    2793 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 15:06:03.278472    2793 start.go:553] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1alpha2
	I0914 15:06:03.278565    2793 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0914 15:06:03.286632    2793 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0914 15:06:03.297267    2793 out.go:204] * Preparing Kubernetes v1.28.1 on Docker 24.0.6 ...
	I0914 15:06:03.297402    2793 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0914 15:06:03.304031    2793 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0914 15:06:03.307159    2793 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0914 15:06:03.307194    2793 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0914 15:06:03.312955    2793 docker.go:636] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-398000
	registry.k8s.io/kube-apiserver:v1.28.1
	registry.k8s.io/kube-proxy:v1.28.1
	registry.k8s.io/kube-controller-manager:v1.28.1
	registry.k8s.io/kube-scheduler:v1.28.1
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0914 15:06:03.312963    2793 docker.go:566] Images already preloaded, skipping extraction
	I0914 15:06:03.313006    2793 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0914 15:06:03.327032    2793 docker.go:636] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-398000
	registry.k8s.io/kube-apiserver:v1.28.1
	registry.k8s.io/kube-proxy:v1.28.1
	registry.k8s.io/kube-controller-manager:v1.28.1
	registry.k8s.io/kube-scheduler:v1.28.1
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0914 15:06:03.327037    2793 cache_images.go:84] Images are preloaded, skipping loading
	I0914 15:06:03.327085    2793 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0914 15:06:03.334503    2793 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0914 15:06:03.334520    2793 cni.go:84] Creating CNI manager for ""
	I0914 15:06:03.334525    2793 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 15:06:03.334529    2793 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0914 15:06:03.334541    2793 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.4 APIServerPort:8441 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-398000 NodeName:functional-398000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 15:06:03.334595    2793 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.4
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-398000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.4
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.4"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 15:06:03.334622    2793 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=functional-398000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:functional-398000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:}
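	The kubelet unit drop-in shown above is copied onto the node a few lines below (10-kubeadm.conf and kubelet.service). A sketch for inspecting what actually landed, assuming shell access to the VM (not part of this test run):
	# show kubelet.service together with the 10-kubeadm.conf drop-in minikube generated
	systemctl cat kubelet
	sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf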
	I0914 15:06:03.334681    2793 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0914 15:06:03.337603    2793 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 15:06:03.337626    2793 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 15:06:03.340521    2793 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0914 15:06:03.345553    2793 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 15:06:03.350808    2793 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1953 bytes)
	I0914 15:06:03.356080    2793 ssh_runner.go:195] Run: grep 192.168.105.4	control-plane.minikube.internal$ /etc/hosts
	I0914 15:06:03.357505    2793 certs.go:56] Setting up /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/functional-398000 for IP: 192.168.105.4
	I0914 15:06:03.357511    2793 certs.go:190] acquiring lock for shared ca certs: {Name:mkd19d6e2143685b57ba1e0d43c4081bbdb26a79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 15:06:03.357637    2793 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17243-1006/.minikube/ca.key
	I0914 15:06:03.357673    2793 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17243-1006/.minikube/proxy-client-ca.key
	I0914 15:06:03.357732    2793 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/functional-398000/client.key
	I0914 15:06:03.357777    2793 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/functional-398000/apiserver.key.942c473b
	I0914 15:06:03.357810    2793 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/functional-398000/proxy-client.key
	I0914 15:06:03.357944    2793 certs.go:437] found cert: /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/1425.pem (1338 bytes)
	W0914 15:06:03.357966    2793 certs.go:433] ignoring /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/1425_empty.pem, impossibly tiny 0 bytes
	I0914 15:06:03.357974    2793 certs.go:437] found cert: /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 15:06:03.357992    2793 certs.go:437] found cert: /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem (1082 bytes)
	I0914 15:06:03.358013    2793 certs.go:437] found cert: /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/cert.pem (1123 bytes)
	I0914 15:06:03.358029    2793 certs.go:437] found cert: /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/key.pem (1675 bytes)
	I0914 15:06:03.358075    2793 certs.go:437] found cert: /Users/jenkins/minikube-integration/17243-1006/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17243-1006/.minikube/files/etc/ssl/certs/14252.pem (1708 bytes)
	I0914 15:06:03.358399    2793 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/functional-398000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0914 15:06:03.365078    2793 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/functional-398000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0914 15:06:03.372490    2793 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/functional-398000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 15:06:03.379943    2793 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/functional-398000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0914 15:06:03.387372    2793 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 15:06:03.393950    2793 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 15:06:03.400830    2793 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 15:06:03.407599    2793 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0914 15:06:03.414199    2793 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/1425.pem --> /usr/share/ca-certificates/1425.pem (1338 bytes)
	I0914 15:06:03.420933    2793 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/files/etc/ssl/certs/14252.pem --> /usr/share/ca-certificates/14252.pem (1708 bytes)
	I0914 15:06:03.428113    2793 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 15:06:03.435113    2793 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 15:06:03.440158    2793 ssh_runner.go:195] Run: openssl version
	I0914 15:06:03.441878    2793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1425.pem && ln -fs /usr/share/ca-certificates/1425.pem /etc/ssl/certs/1425.pem"
	I0914 15:06:03.445090    2793 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1425.pem
	I0914 15:06:03.446564    2793 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 14 22:04 /usr/share/ca-certificates/1425.pem
	I0914 15:06:03.446584    2793 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1425.pem
	I0914 15:06:03.448364    2793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1425.pem /etc/ssl/certs/51391683.0"
	I0914 15:06:03.451285    2793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14252.pem && ln -fs /usr/share/ca-certificates/14252.pem /etc/ssl/certs/14252.pem"
	I0914 15:06:03.454129    2793 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14252.pem
	I0914 15:06:03.455633    2793 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 14 22:04 /usr/share/ca-certificates/14252.pem
	I0914 15:06:03.455654    2793 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14252.pem
	I0914 15:06:03.457519    2793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14252.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 15:06:03.460708    2793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 15:06:03.464075    2793 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 15:06:03.465537    2793 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 14 21:36 /usr/share/ca-certificates/minikubeCA.pem
	I0914 15:06:03.465553    2793 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 15:06:03.467478    2793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
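	The three ln -fs steps above follow OpenSSL's subject-hash lookup convention: each CA in /usr/share/ca-certificates is linked into /etc/ssl/certs under <subject-hash>.0. A sketch of the check, using the minikubeCA hash seen in this log (assumed to be run inside the VM):
	# the hash printed here is what the .0 symlink is named after
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # b5213941
	ls -l /etc/ssl/certs/b5213941.0                                           # -> minikubeCA.pem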
	I0914 15:06:03.470159    2793 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0914 15:06:03.471449    2793 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 15:06:03.473198    2793 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 15:06:03.475111    2793 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 15:06:03.476929    2793 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 15:06:03.478854    2793 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 15:06:03.480691    2793 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0914 15:06:03.482473    2793 kubeadm.go:404] StartCluster: {Name:functional-398000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:functional-398000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 15:06:03.482545    2793 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0914 15:06:03.490968    2793 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 15:06:03.493843    2793 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0914 15:06:03.493851    2793 kubeadm.go:636] restartCluster start
	I0914 15:06:03.493876    2793 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 15:06:03.496796    2793 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 15:06:03.497100    2793 kubeconfig.go:92] found "functional-398000" server: "https://192.168.105.4:8441"
	I0914 15:06:03.497828    2793 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 15:06:03.500838    2793 kubeadm.go:602] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.105.4"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
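	The diff above is the whole reason for the reconfigure: only the apiserver's enable-admission-plugins value changed. A sketch for verifying, once the restart completes, that the apiserver static pod actually runs with the new plugin list (assumed follow-up, not part of this test run):
	# the static pod command line should now contain NamespaceAutoProvision
	kubectl --context functional-398000 -n kube-system get pod kube-apiserver-functional-398000 -o yaml | grep enable-admission-plugins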
	I0914 15:06:03.500841    2793 kubeadm.go:1128] stopping kube-system containers ...
	I0914 15:06:03.500879    2793 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0914 15:06:03.512036    2793 docker.go:462] Stopping containers: [ced7f3bb1f36 747a721aa0e2 bb8f303a73ac 7cd12005ec6a fe836ba5c6a8 d96c262a8374 00a1d0079318 b43acab7a6d9 39f241352e4d 743bccd0cd99 99a190bec20c 71d5d5337566 8d6c1faddf3d 05a1d231bffb 13caa4eea591 583d767acda0 8bb3c4ebbcc3 c7bcdf692052 e3f2a0969c76 73bc9492575d e83fd940d930 a9d97ea6ea65 63d80538b00b 81f27b2bc930]
	I0914 15:06:03.512088    2793 ssh_runner.go:195] Run: docker stop ced7f3bb1f36 747a721aa0e2 bb8f303a73ac 7cd12005ec6a fe836ba5c6a8 d96c262a8374 00a1d0079318 b43acab7a6d9 39f241352e4d 743bccd0cd99 99a190bec20c 71d5d5337566 8d6c1faddf3d 05a1d231bffb 13caa4eea591 583d767acda0 8bb3c4ebbcc3 c7bcdf692052 e3f2a0969c76 73bc9492575d e83fd940d930 a9d97ea6ea65 63d80538b00b 81f27b2bc930
	I0914 15:06:03.518775    2793 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 15:06:03.607549    2793 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 15:06:03.611765    2793 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Sep 14 22:04 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Sep 14 22:04 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Sep 14 22:04 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5605 Sep 14 22:04 /etc/kubernetes/scheduler.conf
	
	I0914 15:06:03.611810    2793 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0914 15:06:03.614981    2793 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0914 15:06:03.617953    2793 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0914 15:06:03.621053    2793 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0914 15:06:03.621072    2793 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 15:06:03.624317    2793 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0914 15:06:03.627495    2793 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0914 15:06:03.627517    2793 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 15:06:03.630291    2793 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 15:06:03.632976    2793 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0914 15:06:03.632979    2793 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 15:06:03.653955    2793 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 15:06:04.034516    2793 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 15:06:04.144572    2793 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 15:06:04.177318    2793 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 15:06:04.213278    2793 api_server.go:52] waiting for apiserver process to appear ...
	I0914 15:06:04.213343    2793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 15:06:04.218522    2793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 15:06:04.724058    2793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 15:06:05.224042    2793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 15:06:05.228291    2793 api_server.go:72] duration metric: took 1.015041584s to wait for apiserver process to appear ...
	I0914 15:06:05.228296    2793 api_server.go:88] waiting for apiserver healthz status ...
	I0914 15:06:05.228303    2793 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0914 15:06:06.516720    2793 api_server.go:279] https://192.168.105.4:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 15:06:06.516728    2793 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 15:06:06.516733    2793 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0914 15:06:06.557888    2793 api_server.go:279] https://192.168.105.4:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 15:06:06.557899    2793 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 15:06:07.059991    2793 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0914 15:06:07.064333    2793 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0914 15:06:07.064340    2793 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0914 15:06:07.559958    2793 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0914 15:06:07.565209    2793 api_server.go:279] https://192.168.105.4:8441/healthz returned 200:
	ok
	I0914 15:06:07.570600    2793 api_server.go:141] control plane version: v1.28.1
	I0914 15:06:07.570609    2793 api_server.go:131] duration metric: took 2.342368917s to wait for apiserver health ...
	I0914 15:06:07.570613    2793 cni.go:84] Creating CNI manager for ""
	I0914 15:06:07.570619    2793 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 15:06:07.576109    2793 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 15:06:07.579854    2793 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 15:06:07.583260    2793 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
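	The bridge CNI definition pushed here is the 457-byte conflist noted above; to see exactly what was written, one could run the following from inside the VM (assumed, not part of this test run):
	# inspect the bridge CNI config installed above
	sudo cat /etc/cni/net.d/1-k8s.conflist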
	I0914 15:06:07.588063    2793 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 15:06:07.592722    2793 system_pods.go:59] 6 kube-system pods found
	I0914 15:06:07.592732    2793 system_pods.go:61] "coredns-5dd5756b68-kws7l" [303cfff0-b738-480e-8109-c41092fc0de7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 15:06:07.592735    2793 system_pods.go:61] "etcd-functional-398000" [58b40509-b883-4c79-9406-14ce077f564f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0914 15:06:07.592737    2793 system_pods.go:61] "kube-apiserver-functional-398000" [2ecfb982-da9f-4587-b334-d31829aed648] Pending
	I0914 15:06:07.592741    2793 system_pods.go:61] "kube-controller-manager-functional-398000" [3536f371-529f-48a2-a844-84bda81fd7ed] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0914 15:06:07.592744    2793 system_pods.go:61] "kube-proxy-vvbs7" [ef8383c7-8450-48b7-9bbf-a7cb13a545be] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0914 15:06:07.592746    2793 system_pods.go:61] "kube-scheduler-functional-398000" [ee71633f-934e-42e3-8662-a659e9b30606] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0914 15:06:07.592748    2793 system_pods.go:74] duration metric: took 4.681667ms to wait for pod list to return data ...
	I0914 15:06:07.592751    2793 node_conditions.go:102] verifying NodePressure condition ...
	I0914 15:06:07.594417    2793 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0914 15:06:07.594424    2793 node_conditions.go:123] node cpu capacity is 2
	I0914 15:06:07.594428    2793 node_conditions.go:105] duration metric: took 1.6755ms to run NodePressure ...
	I0914 15:06:07.594435    2793 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 15:06:07.730403    2793 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0914 15:06:07.734017    2793 kubeadm.go:787] kubelet initialised
	I0914 15:06:07.734022    2793 kubeadm.go:788] duration metric: took 3.611375ms waiting for restarted kubelet to initialise ...
	I0914 15:06:07.734025    2793 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 15:06:07.736791    2793 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-kws7l" in "kube-system" namespace to be "Ready" ...
	I0914 15:06:09.748333    2793 pod_ready.go:102] pod "coredns-5dd5756b68-kws7l" in "kube-system" namespace has status "Ready":"False"
	I0914 15:06:12.247591    2793 pod_ready.go:102] pod "coredns-5dd5756b68-kws7l" in "kube-system" namespace has status "Ready":"False"
	I0914 15:06:14.248058    2793 pod_ready.go:102] pod "coredns-5dd5756b68-kws7l" in "kube-system" namespace has status "Ready":"False"
	I0914 15:06:14.747965    2793 pod_ready.go:92] pod "coredns-5dd5756b68-kws7l" in "kube-system" namespace has status "Ready":"True"
	I0914 15:06:14.747971    2793 pod_ready.go:81] duration metric: took 7.011348375s waiting for pod "coredns-5dd5756b68-kws7l" in "kube-system" namespace to be "Ready" ...
	I0914 15:06:14.747976    2793 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-398000" in "kube-system" namespace to be "Ready" ...
	I0914 15:06:16.757634    2793 pod_ready.go:102] pod "etcd-functional-398000" in "kube-system" namespace has status "Ready":"False"
	I0914 15:06:18.758241    2793 pod_ready.go:102] pod "etcd-functional-398000" in "kube-system" namespace has status "Ready":"False"
	I0914 15:06:21.257965    2793 pod_ready.go:102] pod "etcd-functional-398000" in "kube-system" namespace has status "Ready":"False"
	I0914 15:06:23.258023    2793 pod_ready.go:92] pod "etcd-functional-398000" in "kube-system" namespace has status "Ready":"True"
	I0914 15:06:23.258031    2793 pod_ready.go:81] duration metric: took 8.510265125s waiting for pod "etcd-functional-398000" in "kube-system" namespace to be "Ready" ...
	I0914 15:06:23.258035    2793 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-398000" in "kube-system" namespace to be "Ready" ...
	I0914 15:06:23.260651    2793 pod_ready.go:92] pod "kube-apiserver-functional-398000" in "kube-system" namespace has status "Ready":"True"
	I0914 15:06:23.260654    2793 pod_ready.go:81] duration metric: took 2.617167ms waiting for pod "kube-apiserver-functional-398000" in "kube-system" namespace to be "Ready" ...
	I0914 15:06:23.260658    2793 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-398000" in "kube-system" namespace to be "Ready" ...
	I0914 15:06:23.263153    2793 pod_ready.go:92] pod "kube-controller-manager-functional-398000" in "kube-system" namespace has status "Ready":"True"
	I0914 15:06:23.263156    2793 pod_ready.go:81] duration metric: took 2.49575ms waiting for pod "kube-controller-manager-functional-398000" in "kube-system" namespace to be "Ready" ...
	I0914 15:06:23.263159    2793 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-vvbs7" in "kube-system" namespace to be "Ready" ...
	I0914 15:06:23.265322    2793 pod_ready.go:92] pod "kube-proxy-vvbs7" in "kube-system" namespace has status "Ready":"True"
	I0914 15:06:23.265324    2793 pod_ready.go:81] duration metric: took 2.163708ms waiting for pod "kube-proxy-vvbs7" in "kube-system" namespace to be "Ready" ...
	I0914 15:06:23.265327    2793 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-398000" in "kube-system" namespace to be "Ready" ...
	I0914 15:06:23.267526    2793 pod_ready.go:92] pod "kube-scheduler-functional-398000" in "kube-system" namespace has status "Ready":"True"
	I0914 15:06:23.267529    2793 pod_ready.go:81] duration metric: took 2.199709ms waiting for pod "kube-scheduler-functional-398000" in "kube-system" namespace to be "Ready" ...
	I0914 15:06:23.267532    2793 pod_ready.go:38] duration metric: took 15.533892667s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 15:06:23.267540    2793 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 15:06:23.271325    2793 ops.go:34] apiserver oom_adj: -16
	I0914 15:06:23.271329    2793 kubeadm.go:640] restartCluster took 19.7779725s
	I0914 15:06:23.271331    2793 kubeadm.go:406] StartCluster complete in 19.789360583s
	I0914 15:06:23.271338    2793 settings.go:142] acquiring lock: {Name:mkcccc97e247e7e1b2e556ccc64336c05a92af53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 15:06:23.271421    2793 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17243-1006/kubeconfig
	I0914 15:06:23.271714    2793 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/kubeconfig: {Name:mkeec13fc5a79792669e9cedabfbe21efeb27d3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 15:06:23.271946    2793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0914 15:06:23.271996    2793 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0914 15:06:23.272026    2793 addons.go:69] Setting storage-provisioner=true in profile "functional-398000"
	I0914 15:06:23.272031    2793 addons.go:231] Setting addon storage-provisioner=true in "functional-398000"
	W0914 15:06:23.272033    2793 addons.go:240] addon storage-provisioner should already be in state true
	I0914 15:06:23.272043    2793 addons.go:69] Setting default-storageclass=true in profile "functional-398000"
	I0914 15:06:23.272051    2793 host.go:66] Checking if "functional-398000" exists ...
	I0914 15:06:23.272051    2793 config.go:182] Loaded profile config "functional-398000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 15:06:23.272074    2793 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-398000"
	W0914 15:06:23.272297    2793 host.go:54] host status for "functional-398000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/functional-398000/monitor: connect: connection refused
	W0914 15:06:23.272303    2793 addons.go:277] "functional-398000" is not running, setting storage-provisioner=true and skipping enablement (err=<nil>)
	I0914 15:06:23.274133    2793 kapi.go:248] "coredns" deployment in "kube-system" namespace and "functional-398000" context rescaled to 1 replicas
	I0914 15:06:23.274142    2793 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 15:06:23.279631    2793 out.go:177] * Verifying Kubernetes components...
	I0914 15:06:23.275736    2793 addons.go:231] Setting addon default-storageclass=true in "functional-398000"
	W0914 15:06:23.279643    2793 addons.go:240] addon default-storageclass should already be in state true
	I0914 15:06:23.279658    2793 host.go:66] Checking if "functional-398000" exists ...
	I0914 15:06:23.280371    2793 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 15:06:23.283667    2793 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 15:06:23.283674    2793 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/functional-398000/id_rsa Username:docker}
	I0914 15:06:23.283686    2793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 15:06:23.310922    2793 node_ready.go:35] waiting up to 6m0s for node "functional-398000" to be "Ready" ...
	I0914 15:06:23.310940    2793 start.go:890] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0914 15:06:23.330872    2793 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 15:06:23.458823    2793 node_ready.go:49] node "functional-398000" has status "Ready":"True"
	I0914 15:06:23.458840    2793 node_ready.go:38] duration metric: took 147.902958ms waiting for node "functional-398000" to be "Ready" ...
	I0914 15:06:23.458844    2793 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 15:06:23.550619    2793 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0914 15:06:23.558558    2793 addons.go:502] enable addons completed in 286.5715ms: enabled=[storage-provisioner default-storageclass]
	I0914 15:06:23.659961    2793 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-kws7l" in "kube-system" namespace to be "Ready" ...
	I0914 15:06:24.058532    2793 pod_ready.go:92] pod "coredns-5dd5756b68-kws7l" in "kube-system" namespace has status "Ready":"True"
	I0914 15:06:24.058537    2793 pod_ready.go:81] duration metric: took 398.582125ms waiting for pod "coredns-5dd5756b68-kws7l" in "kube-system" namespace to be "Ready" ...
	I0914 15:06:24.058541    2793 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-398000" in "kube-system" namespace to be "Ready" ...
	I0914 15:06:24.458445    2793 pod_ready.go:92] pod "etcd-functional-398000" in "kube-system" namespace has status "Ready":"True"
	I0914 15:06:24.458450    2793 pod_ready.go:81] duration metric: took 399.9165ms waiting for pod "etcd-functional-398000" in "kube-system" namespace to be "Ready" ...
	I0914 15:06:24.458455    2793 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-398000" in "kube-system" namespace to be "Ready" ...
	I0914 15:06:24.858478    2793 pod_ready.go:92] pod "kube-apiserver-functional-398000" in "kube-system" namespace has status "Ready":"True"
	I0914 15:06:24.858485    2793 pod_ready.go:81] duration metric: took 400.037167ms waiting for pod "kube-apiserver-functional-398000" in "kube-system" namespace to be "Ready" ...
	I0914 15:06:24.858489    2793 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-398000" in "kube-system" namespace to be "Ready" ...
	I0914 15:06:25.258676    2793 pod_ready.go:92] pod "kube-controller-manager-functional-398000" in "kube-system" namespace has status "Ready":"True"
	I0914 15:06:25.258682    2793 pod_ready.go:81] duration metric: took 400.199667ms waiting for pod "kube-controller-manager-functional-398000" in "kube-system" namespace to be "Ready" ...
	I0914 15:06:25.258686    2793 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vvbs7" in "kube-system" namespace to be "Ready" ...
	I0914 15:06:25.658513    2793 pod_ready.go:92] pod "kube-proxy-vvbs7" in "kube-system" namespace has status "Ready":"True"
	I0914 15:06:25.658520    2793 pod_ready.go:81] duration metric: took 399.841ms waiting for pod "kube-proxy-vvbs7" in "kube-system" namespace to be "Ready" ...
	I0914 15:06:25.658524    2793 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-398000" in "kube-system" namespace to be "Ready" ...
	I0914 15:06:26.057040    2793 pod_ready.go:92] pod "kube-scheduler-functional-398000" in "kube-system" namespace has status "Ready":"True"
	I0914 15:06:26.057045    2793 pod_ready.go:81] duration metric: took 398.528292ms waiting for pod "kube-scheduler-functional-398000" in "kube-system" namespace to be "Ready" ...
	I0914 15:06:26.057048    2793 pod_ready.go:38] duration metric: took 2.598264959s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 15:06:26.057063    2793 api_server.go:52] waiting for apiserver process to appear ...
	I0914 15:06:26.057168    2793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 15:06:26.062118    2793 api_server.go:72] duration metric: took 2.788037542s to wait for apiserver process to appear ...
	I0914 15:06:26.062122    2793 api_server.go:88] waiting for apiserver healthz status ...
	I0914 15:06:26.062129    2793 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0914 15:06:26.065752    2793 api_server.go:279] https://192.168.105.4:8441/healthz returned 200:
	ok
	I0914 15:06:26.066583    2793 api_server.go:141] control plane version: v1.28.1
	I0914 15:06:26.066586    2793 api_server.go:131] duration metric: took 4.462458ms to wait for apiserver health ...
	I0914 15:06:26.066589    2793 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 15:06:26.259728    2793 system_pods.go:59] 6 kube-system pods found
	I0914 15:06:26.259733    2793 system_pods.go:61] "coredns-5dd5756b68-kws7l" [303cfff0-b738-480e-8109-c41092fc0de7] Running
	I0914 15:06:26.259736    2793 system_pods.go:61] "etcd-functional-398000" [58b40509-b883-4c79-9406-14ce077f564f] Running
	I0914 15:06:26.259738    2793 system_pods.go:61] "kube-apiserver-functional-398000" [2ecfb982-da9f-4587-b334-d31829aed648] Running
	I0914 15:06:26.259739    2793 system_pods.go:61] "kube-controller-manager-functional-398000" [3536f371-529f-48a2-a844-84bda81fd7ed] Running
	I0914 15:06:26.259741    2793 system_pods.go:61] "kube-proxy-vvbs7" [ef8383c7-8450-48b7-9bbf-a7cb13a545be] Running
	I0914 15:06:26.259742    2793 system_pods.go:61] "kube-scheduler-functional-398000" [ee71633f-934e-42e3-8662-a659e9b30606] Running
	I0914 15:06:26.259745    2793 system_pods.go:74] duration metric: took 193.158667ms to wait for pod list to return data ...
	I0914 15:06:26.259747    2793 default_sa.go:34] waiting for default service account to be created ...
	I0914 15:06:26.458551    2793 default_sa.go:45] found service account: "default"
	I0914 15:06:26.458558    2793 default_sa.go:55] duration metric: took 198.813041ms for default service account to be created ...
	I0914 15:06:26.458561    2793 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 15:06:26.660102    2793 system_pods.go:86] 6 kube-system pods found
	I0914 15:06:26.660108    2793 system_pods.go:89] "coredns-5dd5756b68-kws7l" [303cfff0-b738-480e-8109-c41092fc0de7] Running
	I0914 15:06:26.660110    2793 system_pods.go:89] "etcd-functional-398000" [58b40509-b883-4c79-9406-14ce077f564f] Running
	I0914 15:06:26.660112    2793 system_pods.go:89] "kube-apiserver-functional-398000" [2ecfb982-da9f-4587-b334-d31829aed648] Running
	I0914 15:06:26.660115    2793 system_pods.go:89] "kube-controller-manager-functional-398000" [3536f371-529f-48a2-a844-84bda81fd7ed] Running
	I0914 15:06:26.660116    2793 system_pods.go:89] "kube-proxy-vvbs7" [ef8383c7-8450-48b7-9bbf-a7cb13a545be] Running
	I0914 15:06:26.660118    2793 system_pods.go:89] "kube-scheduler-functional-398000" [ee71633f-934e-42e3-8662-a659e9b30606] Running
	I0914 15:06:26.660120    2793 system_pods.go:126] duration metric: took 201.562917ms to wait for k8s-apps to be running ...
	I0914 15:06:26.660123    2793 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 15:06:26.660200    2793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 15:06:26.666086    2793 system_svc.go:56] duration metric: took 5.96125ms WaitForService to wait for kubelet.
	I0914 15:06:26.666091    2793 kubeadm.go:581] duration metric: took 3.392026458s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0914 15:06:26.666099    2793 node_conditions.go:102] verifying NodePressure condition ...
	I0914 15:06:26.858168    2793 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0914 15:06:26.858175    2793 node_conditions.go:123] node cpu capacity is 2
	I0914 15:06:26.858180    2793 node_conditions.go:105] duration metric: took 192.08375ms to run NodePressure ...
	I0914 15:06:26.858185    2793 start.go:228] waiting for startup goroutines ...
	I0914 15:06:26.858188    2793 start.go:233] waiting for cluster config update ...
	I0914 15:06:26.858192    2793 start.go:242] writing updated cluster config ...
	I0914 15:06:26.858611    2793 ssh_runner.go:195] Run: rm -f paused
	I0914 15:06:26.887408    2793 start.go:600] kubectl: 1.27.2, cluster: 1.28.1 (minor skew: 1)
	I0914 15:06:26.890769    2793 out.go:177] * Done! kubectl is now configured to use "functional-398000" cluster and "default" namespace by default
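	The line before the Done message reports a client/server minor-version skew of 1 (kubectl 1.27.2 against cluster 1.28.1), which is within kubectl's supported one-minor-version skew, so it is informational only. A sketch to confirm both versions against this profile (assumed follow-up):
	kubectl --context functional-398000 version --output=yaml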
	
	* 
	* ==> Docker <==
	* -- Journal begins at Thu 2023-09-14 22:04:38 UTC, ends at Thu 2023-09-14 22:07:14 UTC. --
	Sep 14 22:06:47 functional-398000 cri-dockerd[6569]: time="2023-09-14T22:06:47Z" level=info msg="Stop pulling image registry.k8s.io/echoserver-arm:1.8: Status: Downloaded newer image for registry.k8s.io/echoserver-arm:1.8"
	Sep 14 22:06:47 functional-398000 dockerd[6313]: time="2023-09-14T22:06:47.099107449Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 14 22:06:47 functional-398000 dockerd[6313]: time="2023-09-14T22:06:47.099135281Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 22:06:47 functional-398000 dockerd[6313]: time="2023-09-14T22:06:47.099145405Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 14 22:06:47 functional-398000 dockerd[6313]: time="2023-09-14T22:06:47.099151572Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 22:06:47 functional-398000 dockerd[6307]: time="2023-09-14T22:06:47.138570152Z" level=info msg="ignoring event" container=897ea5348a4e73f76a7182cf0c932cdec56f480439588e9899a409a377077d8f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 14 22:06:47 functional-398000 dockerd[6313]: time="2023-09-14T22:06:47.138579818Z" level=info msg="shim disconnected" id=897ea5348a4e73f76a7182cf0c932cdec56f480439588e9899a409a377077d8f namespace=moby
	Sep 14 22:06:47 functional-398000 dockerd[6313]: time="2023-09-14T22:06:47.138896885Z" level=warning msg="cleaning up after shim disconnected" id=897ea5348a4e73f76a7182cf0c932cdec56f480439588e9899a409a377077d8f namespace=moby
	Sep 14 22:06:47 functional-398000 dockerd[6313]: time="2023-09-14T22:06:47.138908551Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 14 22:06:47 functional-398000 dockerd[6313]: time="2023-09-14T22:06:47.545078221Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 14 22:06:47 functional-398000 dockerd[6313]: time="2023-09-14T22:06:47.545108677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 22:06:47 functional-398000 dockerd[6313]: time="2023-09-14T22:06:47.545225713Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 14 22:06:47 functional-398000 dockerd[6313]: time="2023-09-14T22:06:47.545353415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 22:06:47 functional-398000 dockerd[6307]: time="2023-09-14T22:06:47.591310691Z" level=info msg="ignoring event" container=c93ecffd41904daa793936357cbf27413a3df08c16075c0fbe5eb09d94cf2afa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 14 22:06:47 functional-398000 dockerd[6313]: time="2023-09-14T22:06:47.591530096Z" level=info msg="shim disconnected" id=c93ecffd41904daa793936357cbf27413a3df08c16075c0fbe5eb09d94cf2afa namespace=moby
	Sep 14 22:06:47 functional-398000 dockerd[6313]: time="2023-09-14T22:06:47.591556886Z" level=warning msg="cleaning up after shim disconnected" id=c93ecffd41904daa793936357cbf27413a3df08c16075c0fbe5eb09d94cf2afa namespace=moby
	Sep 14 22:06:47 functional-398000 dockerd[6313]: time="2023-09-14T22:06:47.591560969Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 14 22:07:05 functional-398000 dockerd[6313]: time="2023-09-14T22:07:05.268807745Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 14 22:07:05 functional-398000 dockerd[6313]: time="2023-09-14T22:07:05.268845869Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 22:07:05 functional-398000 dockerd[6313]: time="2023-09-14T22:07:05.268854910Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 14 22:07:05 functional-398000 dockerd[6313]: time="2023-09-14T22:07:05.268861160Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 22:07:05 functional-398000 dockerd[6307]: time="2023-09-14T22:07:05.316596910Z" level=info msg="ignoring event" container=288d26c97910eae0c39bfbb31fb73e6b1f6ca2e294c39ef16da1ce4e7de4f8e5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 14 22:07:05 functional-398000 dockerd[6313]: time="2023-09-14T22:07:05.316830235Z" level=info msg="shim disconnected" id=288d26c97910eae0c39bfbb31fb73e6b1f6ca2e294c39ef16da1ce4e7de4f8e5 namespace=moby
	Sep 14 22:07:05 functional-398000 dockerd[6313]: time="2023-09-14T22:07:05.316873442Z" level=warning msg="cleaning up after shim disconnected" id=288d26c97910eae0c39bfbb31fb73e6b1f6ca2e294c39ef16da1ce4e7de4f8e5 namespace=moby
	Sep 14 22:07:05 functional-398000 dockerd[6313]: time="2023-09-14T22:07:05.316877942Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                           CREATED              STATE               NAME                      ATTEMPT             POD ID
	288d26c97910e       72565bf5bbedf                                                                   9 seconds ago        Exited              echoserver-arm            2                   f498266e07f79
	c386b23db23e3       nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70   38 seconds ago       Running             nginx                     0                   de456e52e5307
	b860c7fb80078       97e04611ad434                                                                   About a minute ago   Running             coredns                   2                   f22dbd22efc86
	00d5a8d446781       812f5241df7fd                                                                   About a minute ago   Running             kube-proxy                2                   08fb4aa5fd702
	27e65ce8b06ea       b4a5a57e99492                                                                   About a minute ago   Running             kube-scheduler            2                   53a4185973ca2
	5d78ec6933fb1       9cdd6470f48c8                                                                   About a minute ago   Running             etcd                      2                   6a468539b06d5
	b1ebafd7bfb74       b29fb62480892                                                                   About a minute ago   Running             kube-apiserver            0                   a20e702e38f80
	a9108297ffcf3       8b6e1980b7584                                                                   About a minute ago   Running             kube-controller-manager   2                   14a97ed1d5b34
	ced7f3bb1f36c       b4a5a57e99492                                                                   About a minute ago   Exited              kube-scheduler            1                   b43acab7a6d9a
	bb8f303a73ac4       9cdd6470f48c8                                                                   About a minute ago   Exited              etcd                      1                   00a1d00793183
	7cd12005ec6a5       8b6e1980b7584                                                                   About a minute ago   Exited              kube-controller-manager   1                   d96c262a83749
	fe836ba5c6a85       812f5241df7fd                                                                   About a minute ago   Exited              kube-proxy                1                   39f241352e4d4
	99a190bec20ca       97e04611ad434                                                                   About a minute ago   Exited              coredns                   1                   71d5d5337566d
	
	* 
	* ==> coredns [99a190bec20c] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:41226 - 2540 "HINFO IN 3398687459190580947.5821229649940585176. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004500859s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> coredns [b860c7fb8007] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:35439 - 34679 "HINFO IN 7769998063386430892.3783104102286343921. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004943768s
	[INFO] 10.244.0.1:10002 - 48046 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000098749s
	[INFO] 10.244.0.1:24573 - 55585 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000096041s
	[INFO] 10.244.0.1:18231 - 47578 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000025958s
	[INFO] 10.244.0.1:15227 - 7535 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.001000206s
	[INFO] 10.244.0.1:8155 - 60009 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.00008025s
	[INFO] 10.244.0.1:37791 - 47737 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000111083s
	
	* 
	* ==> describe nodes <==
	* Name:               functional-398000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-398000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7492f2ae2d9b6e62b385ffcd97ebad62c645e82
	                    minikube.k8s.io/name=functional-398000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_14T15_04_54_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Sep 2023 22:04:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-398000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Sep 2023 22:07:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Sep 2023 22:07:08 +0000   Thu, 14 Sep 2023 22:04:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Sep 2023 22:07:08 +0000   Thu, 14 Sep 2023 22:04:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Sep 2023 22:07:08 +0000   Thu, 14 Sep 2023 22:04:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Sep 2023 22:07:08 +0000   Thu, 14 Sep 2023 22:04:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-398000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 83d273568dbb484f86c90a787a13c422
	  System UUID:                83d273568dbb484f86c90a787a13c422
	  Boot ID:                    ce2754fd-8ddf-45a9-87a9-284ccd819cb4
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-connect-7799dfb7c6-d6spp          0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 coredns-5dd5756b68-kws7l                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m5s
	  kube-system                 etcd-functional-398000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m20s
	  kube-system                 kube-apiserver-functional-398000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 kube-controller-manager-functional-398000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 kube-proxy-vvbs7                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 kube-scheduler-functional-398000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 66s                    kube-proxy       
	  Normal  Starting                 105s                   kube-proxy       
	  Normal  Starting                 2m5s                   kube-proxy       
	  Normal  NodeAllocatableEnforced  2m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m25s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    2m24s (x8 over 2m25s)  kubelet          Node functional-398000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m24s (x7 over 2m25s)  kubelet          Node functional-398000 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m24s (x8 over 2m25s)  kubelet          Node functional-398000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientMemory  2m20s                  kubelet          Node functional-398000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  2m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m20s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    2m20s                  kubelet          Node functional-398000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m20s                  kubelet          Node functional-398000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                2m16s                  kubelet          Node functional-398000 status is now: NodeReady
	  Normal  RegisteredNode           2m6s                   node-controller  Node functional-398000 event: Registered Node functional-398000 in Controller
	  Normal  RegisteredNode           93s                    node-controller  Node functional-398000 event: Registered Node functional-398000 in Controller
	  Normal  NodeHasNoDiskPressure    70s (x8 over 70s)      kubelet          Node functional-398000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  70s (x8 over 70s)      kubelet          Node functional-398000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 70s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     70s (x7 over 70s)      kubelet          Node functional-398000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  70s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           56s                    node-controller  Node functional-398000 event: Registered Node functional-398000 in Controller
	
	* 
	* ==> dmesg <==
	* [  +2.424980] systemd-fstab-generator[3671]: Ignoring "noauto" for root device
	[  +0.149586] systemd-fstab-generator[3704]: Ignoring "noauto" for root device
	[  +0.096786] systemd-fstab-generator[3715]: Ignoring "noauto" for root device
	[  +0.094421] systemd-fstab-generator[3728]: Ignoring "noauto" for root device
	[  +5.146047] kauditd_printk_skb: 23 callbacks suppressed
	[  +6.243145] systemd-fstab-generator[4238]: Ignoring "noauto" for root device
	[  +0.100509] systemd-fstab-generator[4305]: Ignoring "noauto" for root device
	[  +0.106345] systemd-fstab-generator[4385]: Ignoring "noauto" for root device
	[  +0.102020] systemd-fstab-generator[4418]: Ignoring "noauto" for root device
	[  +0.115231] systemd-fstab-generator[4490]: Ignoring "noauto" for root device
	[  +4.939186] kauditd_printk_skb: 34 callbacks suppressed
	[ +22.179518] systemd-fstab-generator[5888]: Ignoring "noauto" for root device
	[  +0.153309] systemd-fstab-generator[5921]: Ignoring "noauto" for root device
	[  +0.099499] systemd-fstab-generator[5932]: Ignoring "noauto" for root device
	[  +0.090804] systemd-fstab-generator[5945]: Ignoring "noauto" for root device
	[Sep14 22:06] systemd-fstab-generator[6458]: Ignoring "noauto" for root device
	[  +0.084092] systemd-fstab-generator[6469]: Ignoring "noauto" for root device
	[  +0.084837] systemd-fstab-generator[6480]: Ignoring "noauto" for root device
	[  +0.085960] systemd-fstab-generator[6491]: Ignoring "noauto" for root device
	[  +0.122048] systemd-fstab-generator[6562]: Ignoring "noauto" for root device
	[  +0.900773] systemd-fstab-generator[6809]: Ignoring "noauto" for root device
	[  +3.826634] kauditd_printk_skb: 34 callbacks suppressed
	[ +25.618689] kauditd_printk_skb: 11 callbacks suppressed
	[  +9.983692] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.829662] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	
	* 
	* ==> etcd [5d78ec6933fb] <==
	* {"level":"info","ts":"2023-09-14T22:06:05.020009Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-09-14T22:06:05.020039Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2023-09-14T22:06:05.020088Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-14T22:06:05.020097Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-14T22:06:05.0201Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-14T22:06:05.020168Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-09-14T22:06:05.020171Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-09-14T22:06:05.020379Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 switched to configuration voters=(527499358918876438)"}
	{"level":"info","ts":"2023-09-14T22:06:05.020401Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2023-09-14T22:06:05.020433Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-14T22:06:05.020443Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-14T22:06:05.99662Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2023-09-14T22:06:05.996674Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-09-14T22:06:05.996729Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2023-09-14T22:06:05.996745Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2023-09-14T22:06:05.996842Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2023-09-14T22:06:05.996853Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2023-09-14T22:06:05.996881Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2023-09-14T22:06:05.998256Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-14T22:06:05.998368Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-14T22:06:05.999198Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-14T22:06:05.999205Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2023-09-14T22:06:05.998258Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-398000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-14T22:06:05.999296Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-14T22:06:06.001195Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> etcd [bb8f303a73ac] <==
	* {"level":"info","ts":"2023-09-14T22:05:26.931093Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-14T22:05:28.178993Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 2"}
	{"level":"info","ts":"2023-09-14T22:05:28.179221Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-09-14T22:05:28.179272Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2023-09-14T22:05:28.179303Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2023-09-14T22:05:28.179326Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2023-09-14T22:05:28.179354Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2023-09-14T22:05:28.1794Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2023-09-14T22:05:28.181915Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-14T22:05:28.18192Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-398000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-14T22:05:28.182626Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-14T22:05:28.184468Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2023-09-14T22:05:28.185646Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-14T22:05:28.185995Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-14T22:05:28.186039Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-14T22:05:51.569901Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-09-14T22:05:51.569924Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"functional-398000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2023-09-14T22:05:51.569955Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-14T22:05:51.570005Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-14T22:05:51.577717Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-14T22:05:51.577741Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"info","ts":"2023-09-14T22:05:51.578942Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2023-09-14T22:05:51.580076Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-09-14T22:05:51.580105Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-09-14T22:05:51.58011Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"functional-398000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	* 
	* ==> kernel <==
	*  22:07:14 up 2 min,  0 users,  load average: 0.59, 0.39, 0.16
	Linux functional-398000 5.10.57 #1 SMP PREEMPT Wed Sep 13 19:05:16 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [b1ebafd7bfb7] <==
	* I0914 22:06:06.636368       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0914 22:06:06.636850       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0914 22:06:06.636900       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0914 22:06:06.636889       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0914 22:06:06.637460       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0914 22:06:06.637587       1 shared_informer.go:318] Caches are synced for configmaps
	I0914 22:06:06.644100       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0914 22:06:06.644136       1 aggregator.go:166] initial CRD sync complete...
	I0914 22:06:06.644152       1 autoregister_controller.go:141] Starting autoregister controller
	I0914 22:06:06.644159       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0914 22:06:06.644161       1 cache.go:39] Caches are synced for autoregister controller
	I0914 22:06:06.682012       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0914 22:06:07.542588       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0914 22:06:07.648418       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.105.4]
	I0914 22:06:07.648982       1 controller.go:624] quota admission added evaluator for: endpoints
	I0914 22:06:07.652120       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0914 22:06:07.723308       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0914 22:06:07.727183       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0914 22:06:07.744395       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0914 22:06:07.757113       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0914 22:06:07.760308       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0914 22:06:28.305447       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.102.175.88"}
	I0914 22:06:33.131267       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.103.117.53"}
	I0914 22:06:43.516530       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0914 22:06:43.559600       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.99.1.28"}
	
	* 
	* ==> kube-controller-manager [7cd12005ec6a] <==
	* I0914 22:05:41.757114       1 shared_informer.go:318] Caches are synced for ephemeral
	I0914 22:05:41.804499       1 shared_informer.go:318] Caches are synced for daemon sets
	I0914 22:05:41.804522       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0914 22:05:41.806551       1 shared_informer.go:318] Caches are synced for persistent volume
	I0914 22:05:41.806635       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0914 22:05:41.806987       1 shared_informer.go:318] Caches are synced for service account
	I0914 22:05:41.807042       1 shared_informer.go:318] Caches are synced for TTL
	I0914 22:05:41.809073       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0914 22:05:41.809087       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0914 22:05:41.809132       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0914 22:05:41.809160       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0914 22:05:41.814391       1 shared_informer.go:318] Caches are synced for endpoint
	I0914 22:05:41.814476       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0914 22:05:41.814516       1 shared_informer.go:318] Caches are synced for node
	I0914 22:05:41.814565       1 range_allocator.go:174] "Sending events to api server"
	I0914 22:05:41.814592       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0914 22:05:41.814606       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0914 22:05:41.814640       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0914 22:05:41.857071       1 shared_informer.go:318] Caches are synced for PV protection
	I0914 22:05:41.913246       1 shared_informer.go:318] Caches are synced for resource quota
	I0914 22:05:41.945595       1 shared_informer.go:318] Caches are synced for resource quota
	I0914 22:05:41.956418       1 shared_informer.go:318] Caches are synced for attach detach
	I0914 22:05:42.327421       1 shared_informer.go:318] Caches are synced for garbage collector
	I0914 22:05:42.332753       1 shared_informer.go:318] Caches are synced for garbage collector
	I0914 22:05:42.332795       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	* 
	* ==> kube-controller-manager [a9108297ffcf] <==
	* I0914 22:06:18.905799       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0914 22:06:18.905801       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0914 22:06:18.907253       1 shared_informer.go:318] Caches are synced for PVC protection
	I0914 22:06:18.910464       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0914 22:06:18.910530       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="29.416µs"
	I0914 22:06:18.913740       1 shared_informer.go:318] Caches are synced for deployment
	I0914 22:06:18.914813       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0914 22:06:18.918005       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0914 22:06:19.011379       1 shared_informer.go:318] Caches are synced for stateful set
	I0914 22:06:19.024682       1 shared_informer.go:318] Caches are synced for resource quota
	I0914 22:06:19.082303       1 shared_informer.go:318] Caches are synced for daemon sets
	I0914 22:06:19.107011       1 shared_informer.go:318] Caches are synced for resource quota
	I0914 22:06:19.409438       1 shared_informer.go:318] Caches are synced for garbage collector
	I0914 22:06:19.409519       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0914 22:06:19.423537       1 shared_informer.go:318] Caches are synced for garbage collector
	I0914 22:06:43.518668       1 event.go:307] "Event occurred" object="default/hello-node-connect" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-connect-7799dfb7c6 to 1"
	I0914 22:06:43.525659       1 event.go:307] "Event occurred" object="default/hello-node-connect-7799dfb7c6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-connect-7799dfb7c6-d6spp"
	I0914 22:06:43.529074       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="10.260769ms"
	I0914 22:06:43.546070       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="16.96988ms"
	I0914 22:06:43.551610       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="5.48703ms"
	I0914 22:06:43.551641       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="14.666µs"
	I0914 22:06:47.523542       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="52.247µs"
	I0914 22:06:48.530875       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="115.327µs"
	I0914 22:06:49.536998       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="51.331µs"
	I0914 22:07:05.602638       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="43.124µs"
	
	* 
	* ==> kube-proxy [00d5a8d44678] <==
	* I0914 22:06:07.816355       1 server_others.go:69] "Using iptables proxy"
	I0914 22:06:07.820664       1 node.go:141] Successfully retrieved node IP: 192.168.105.4
	I0914 22:06:07.845781       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0914 22:06:07.845795       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0914 22:06:07.847332       1 server_others.go:152] "Using iptables Proxier"
	I0914 22:06:07.847352       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0914 22:06:07.847414       1 server.go:846] "Version info" version="v1.28.1"
	I0914 22:06:07.847419       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 22:06:07.848575       1 config.go:188] "Starting service config controller"
	I0914 22:06:07.848596       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0914 22:06:07.848605       1 config.go:97] "Starting endpoint slice config controller"
	I0914 22:06:07.848607       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0914 22:06:07.848878       1 config.go:315] "Starting node config controller"
	I0914 22:06:07.848880       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0914 22:06:07.949411       1 shared_informer.go:318] Caches are synced for node config
	I0914 22:06:07.949481       1 shared_informer.go:318] Caches are synced for service config
	I0914 22:06:07.949518       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-proxy [fe836ba5c6a8] <==
	* I0914 22:05:27.200738       1 server_others.go:69] "Using iptables proxy"
	I0914 22:05:28.838300       1 node.go:141] Successfully retrieved node IP: 192.168.105.4
	I0914 22:05:28.852531       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0914 22:05:28.852564       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0914 22:05:28.853277       1 server_others.go:152] "Using iptables Proxier"
	I0914 22:05:28.853310       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0914 22:05:28.853369       1 server.go:846] "Version info" version="v1.28.1"
	I0914 22:05:28.853373       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 22:05:28.853778       1 config.go:315] "Starting node config controller"
	I0914 22:05:28.853782       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0914 22:05:28.853939       1 config.go:188] "Starting service config controller"
	I0914 22:05:28.853941       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0914 22:05:28.853946       1 config.go:97] "Starting endpoint slice config controller"
	I0914 22:05:28.853948       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0914 22:05:28.954393       1 shared_informer.go:318] Caches are synced for node config
	I0914 22:05:28.954420       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0914 22:05:28.954432       1 shared_informer.go:318] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [27e65ce8b06e] <==
	* I0914 22:06:05.580685       1 serving.go:348] Generated self-signed cert in-memory
	W0914 22:06:06.569342       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0914 22:06:06.569405       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0914 22:06:06.569422       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0914 22:06:06.569450       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0914 22:06:06.605543       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.1"
	I0914 22:06:06.605850       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 22:06:06.606744       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0914 22:06:06.606956       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0914 22:06:06.606964       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0914 22:06:06.606977       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0914 22:06:06.707892       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [ced7f3bb1f36] <==
	* I0914 22:05:27.427407       1 serving.go:348] Generated self-signed cert in-memory
	W0914 22:05:28.797309       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0914 22:05:28.797325       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0914 22:05:28.797329       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0914 22:05:28.797332       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0914 22:05:28.837226       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.1"
	I0914 22:05:28.837536       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 22:05:28.838643       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0914 22:05:28.838657       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0914 22:05:28.838949       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0914 22:05:28.839738       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0914 22:05:28.939539       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0914 22:05:51.550928       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0914 22:05:51.550993       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I0914 22:05:51.551057       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0914 22:05:51.551158       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-09-14 22:04:38 UTC, ends at Thu 2023-09-14 22:07:14 UTC. --
	Sep 14 22:06:31 functional-398000 kubelet[6815]: I0914 22:06:31.647691    6815 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4203d5a-62db-4b13-8f3f-8707ba242d29-kube-api-access-xmdb8" (OuterVolumeSpecName: "kube-api-access-xmdb8") pod "f4203d5a-62db-4b13-8f3f-8707ba242d29" (UID: "f4203d5a-62db-4b13-8f3f-8707ba242d29"). InnerVolumeSpecName "kube-api-access-xmdb8". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 14 22:06:31 functional-398000 kubelet[6815]: I0914 22:06:31.746000    6815 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-xmdb8\" (UniqueName: \"kubernetes.io/projected/f4203d5a-62db-4b13-8f3f-8707ba242d29-kube-api-access-xmdb8\") on node \"functional-398000\" DevicePath \"\""
	Sep 14 22:06:33 functional-398000 kubelet[6815]: I0914 22:06:33.127852    6815 topology_manager.go:215] "Topology Admit Handler" podUID="3b4d8f18-d4cc-45f6-9574-a88f0cdb0809" podNamespace="default" podName="nginx-svc"
	Sep 14 22:06:33 functional-398000 kubelet[6815]: E0914 22:06:33.127882    6815 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2711197e24eb5168635b351a1f87e222" containerName="kube-apiserver"
	Sep 14 22:06:33 functional-398000 kubelet[6815]: I0914 22:06:33.127897    6815 memory_manager.go:346] "RemoveStaleState removing state" podUID="2711197e24eb5168635b351a1f87e222" containerName="kube-apiserver"
	Sep 14 22:06:33 functional-398000 kubelet[6815]: I0914 22:06:33.253498    6815 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmjq9\" (UniqueName: \"kubernetes.io/projected/3b4d8f18-d4cc-45f6-9574-a88f0cdb0809-kube-api-access-wmjq9\") pod \"nginx-svc\" (UID: \"3b4d8f18-d4cc-45f6-9574-a88f0cdb0809\") " pod="default/nginx-svc"
	Sep 14 22:06:34 functional-398000 kubelet[6815]: I0914 22:06:34.247333    6815 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="f4203d5a-62db-4b13-8f3f-8707ba242d29" path="/var/lib/kubelet/pods/f4203d5a-62db-4b13-8f3f-8707ba242d29/volumes"
	Sep 14 22:06:37 functional-398000 kubelet[6815]: I0914 22:06:37.477483    6815 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-svc" podStartSLOduration=1.643032075 podCreationTimestamp="2023-09-14 22:06:33 +0000 UTC" firstStartedPulling="2023-09-14 22:06:33.595252677 +0000 UTC m=+29.416025220" lastFinishedPulling="2023-09-14 22:06:36.429679376 +0000 UTC m=+32.250451918" observedRunningTime="2023-09-14 22:06:37.477013274 +0000 UTC m=+33.297785775" watchObservedRunningTime="2023-09-14 22:06:37.477458773 +0000 UTC m=+33.298231316"
	Sep 14 22:06:43 functional-398000 kubelet[6815]: I0914 22:06:43.528510    6815 topology_manager.go:215] "Topology Admit Handler" podUID="17388bde-57a9-4d82-ac4a-1ca198a0a870" podNamespace="default" podName="hello-node-connect-7799dfb7c6-d6spp"
	Sep 14 22:06:43 functional-398000 kubelet[6815]: I0914 22:06:43.619282    6815 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwrjn\" (UniqueName: \"kubernetes.io/projected/17388bde-57a9-4d82-ac4a-1ca198a0a870-kube-api-access-qwrjn\") pod \"hello-node-connect-7799dfb7c6-d6spp\" (UID: \"17388bde-57a9-4d82-ac4a-1ca198a0a870\") " pod="default/hello-node-connect-7799dfb7c6-d6spp"
	Sep 14 22:06:47 functional-398000 kubelet[6815]: I0914 22:06:47.517910    6815 scope.go:117] "RemoveContainer" containerID="897ea5348a4e73f76a7182cf0c932cdec56f480439588e9899a409a377077d8f"
	Sep 14 22:06:48 functional-398000 kubelet[6815]: I0914 22:06:48.524944    6815 scope.go:117] "RemoveContainer" containerID="897ea5348a4e73f76a7182cf0c932cdec56f480439588e9899a409a377077d8f"
	Sep 14 22:06:48 functional-398000 kubelet[6815]: I0914 22:06:48.525102    6815 scope.go:117] "RemoveContainer" containerID="c93ecffd41904daa793936357cbf27413a3df08c16075c0fbe5eb09d94cf2afa"
	Sep 14 22:06:48 functional-398000 kubelet[6815]: E0914 22:06:48.525209    6815 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 10s restarting failed container=echoserver-arm pod=hello-node-connect-7799dfb7c6-d6spp_default(17388bde-57a9-4d82-ac4a-1ca198a0a870)\"" pod="default/hello-node-connect-7799dfb7c6-d6spp" podUID="17388bde-57a9-4d82-ac4a-1ca198a0a870"
	Sep 14 22:06:49 functional-398000 kubelet[6815]: I0914 22:06:49.531602    6815 scope.go:117] "RemoveContainer" containerID="c93ecffd41904daa793936357cbf27413a3df08c16075c0fbe5eb09d94cf2afa"
	Sep 14 22:06:49 functional-398000 kubelet[6815]: E0914 22:06:49.531695    6815 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 10s restarting failed container=echoserver-arm pod=hello-node-connect-7799dfb7c6-d6spp_default(17388bde-57a9-4d82-ac4a-1ca198a0a870)\"" pod="default/hello-node-connect-7799dfb7c6-d6spp" podUID="17388bde-57a9-4d82-ac4a-1ca198a0a870"
	Sep 14 22:07:04 functional-398000 kubelet[6815]: E0914 22:07:04.249375    6815 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 14 22:07:04 functional-398000 kubelet[6815]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 14 22:07:04 functional-398000 kubelet[6815]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 22:07:04 functional-398000 kubelet[6815]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 14 22:07:04 functional-398000 kubelet[6815]: I0914 22:07:04.321982    6815 scope.go:117] "RemoveContainer" containerID="747a721aa0e28565205d41c7401ef0a9cf79c625b0c48ca5fae9d6eec322fe99"
	Sep 14 22:07:05 functional-398000 kubelet[6815]: I0914 22:07:05.245680    6815 scope.go:117] "RemoveContainer" containerID="c93ecffd41904daa793936357cbf27413a3df08c16075c0fbe5eb09d94cf2afa"
	Sep 14 22:07:05 functional-398000 kubelet[6815]: I0914 22:07:05.597761    6815 scope.go:117] "RemoveContainer" containerID="c93ecffd41904daa793936357cbf27413a3df08c16075c0fbe5eb09d94cf2afa"
	Sep 14 22:07:05 functional-398000 kubelet[6815]: I0914 22:07:05.597889    6815 scope.go:117] "RemoveContainer" containerID="288d26c97910eae0c39bfbb31fb73e6b1f6ca2e294c39ef16da1ce4e7de4f8e5"
	Sep 14 22:07:05 functional-398000 kubelet[6815]: E0914 22:07:05.598225    6815 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-7799dfb7c6-d6spp_default(17388bde-57a9-4d82-ac4a-1ca198a0a870)\"" pod="default/hello-node-connect-7799dfb7c6-d6spp" podUID="17388bde-57a9-4d82-ac4a-1ca198a0a870"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-398000 -n functional-398000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-398000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (31.51s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (240.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
functional_test_pvc_test.go:44: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "integration-test=storage-provisioner" failed to start within 4m0s: context deadline exceeded ****
functional_test_pvc_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-398000 -n functional-398000
functional_test_pvc_test.go:44: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2023-09-14 15:10:32.874464 -0700 PDT m=+2100.076329334
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: integration-test=storage-provisioner within 4m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-398000 -n functional-398000
helpers_test.go:244: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|----------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                           Args                           |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|----------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| image          | functional-398000 image save                             | functional-398000 | jenkins | v1.31.2 | 14 Sep 23 15:07 PDT | 14 Sep 23 15:07 PDT |
	|                | gcr.io/google-containers/addon-resizer:functional-398000 |                   |         |         |                     |                     |
	|                | /Users/jenkins/workspace/addon-resizer-save.tar          |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-398000 image rm                               | functional-398000 | jenkins | v1.31.2 | 14 Sep 23 15:07 PDT | 14 Sep 23 15:07 PDT |
	|                | gcr.io/google-containers/addon-resizer:functional-398000 |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-398000 image ls                               | functional-398000 | jenkins | v1.31.2 | 14 Sep 23 15:07 PDT | 14 Sep 23 15:07 PDT |
	| image          | functional-398000 image load                             | functional-398000 | jenkins | v1.31.2 | 14 Sep 23 15:07 PDT | 14 Sep 23 15:07 PDT |
	|                | /Users/jenkins/workspace/addon-resizer-save.tar          |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-398000 image ls                               | functional-398000 | jenkins | v1.31.2 | 14 Sep 23 15:07 PDT | 14 Sep 23 15:07 PDT |
	| image          | functional-398000 image save --daemon                    | functional-398000 | jenkins | v1.31.2 | 14 Sep 23 15:07 PDT | 14 Sep 23 15:07 PDT |
	|                | gcr.io/google-containers/addon-resizer:functional-398000 |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| ssh            | functional-398000 ssh sudo cat                           | functional-398000 | jenkins | v1.31.2 | 14 Sep 23 15:07 PDT | 14 Sep 23 15:07 PDT |
	|                | /etc/ssl/certs/1425.pem                                  |                   |         |         |                     |                     |
	| ssh            | functional-398000 ssh sudo cat                           | functional-398000 | jenkins | v1.31.2 | 14 Sep 23 15:07 PDT | 14 Sep 23 15:07 PDT |
	|                | /usr/share/ca-certificates/1425.pem                      |                   |         |         |                     |                     |
	| ssh            | functional-398000 ssh sudo cat                           | functional-398000 | jenkins | v1.31.2 | 14 Sep 23 15:07 PDT | 14 Sep 23 15:07 PDT |
	|                | /etc/ssl/certs/51391683.0                                |                   |         |         |                     |                     |
	| ssh            | functional-398000 ssh sudo cat                           | functional-398000 | jenkins | v1.31.2 | 14 Sep 23 15:07 PDT | 14 Sep 23 15:07 PDT |
	|                | /etc/ssl/certs/14252.pem                                 |                   |         |         |                     |                     |
	| ssh            | functional-398000 ssh sudo cat                           | functional-398000 | jenkins | v1.31.2 | 14 Sep 23 15:07 PDT | 14 Sep 23 15:07 PDT |
	|                | /usr/share/ca-certificates/14252.pem                     |                   |         |         |                     |                     |
	| ssh            | functional-398000 ssh sudo cat                           | functional-398000 | jenkins | v1.31.2 | 14 Sep 23 15:07 PDT | 14 Sep 23 15:07 PDT |
	|                | /etc/ssl/certs/3ec20f2e.0                                |                   |         |         |                     |                     |
	| docker-env     | functional-398000 docker-env                             | functional-398000 | jenkins | v1.31.2 | 14 Sep 23 15:07 PDT | 14 Sep 23 15:07 PDT |
	| docker-env     | functional-398000 docker-env                             | functional-398000 | jenkins | v1.31.2 | 14 Sep 23 15:07 PDT | 14 Sep 23 15:07 PDT |
	| ssh            | functional-398000 ssh sudo cat                           | functional-398000 | jenkins | v1.31.2 | 14 Sep 23 15:07 PDT | 14 Sep 23 15:07 PDT |
	|                | /etc/test/nested/copy/1425/hosts                         |                   |         |         |                     |                     |
	| image          | functional-398000                                        | functional-398000 | jenkins | v1.31.2 | 14 Sep 23 15:07 PDT | 14 Sep 23 15:07 PDT |
	|                | image ls --format short                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-398000                                        | functional-398000 | jenkins | v1.31.2 | 14 Sep 23 15:07 PDT | 14 Sep 23 15:07 PDT |
	|                | image ls --format yaml                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-398000                                        | functional-398000 | jenkins | v1.31.2 | 14 Sep 23 15:07 PDT | 14 Sep 23 15:07 PDT |
	|                | image ls --format json                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-398000                                        | functional-398000 | jenkins | v1.31.2 | 14 Sep 23 15:07 PDT | 14 Sep 23 15:07 PDT |
	|                | image ls --format table                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| ssh            | functional-398000 ssh pgrep                              | functional-398000 | jenkins | v1.31.2 | 14 Sep 23 15:07 PDT |                     |
	|                | buildkitd                                                |                   |         |         |                     |                     |
	| image          | functional-398000 image build -t                         | functional-398000 | jenkins | v1.31.2 | 14 Sep 23 15:07 PDT | 14 Sep 23 15:07 PDT |
	|                | localhost/my-image:functional-398000                     |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                         |                   |         |         |                     |                     |
	| image          | functional-398000 image ls                               | functional-398000 | jenkins | v1.31.2 | 14 Sep 23 15:07 PDT | 14 Sep 23 15:07 PDT |
	| update-context | functional-398000                                        | functional-398000 | jenkins | v1.31.2 | 14 Sep 23 15:07 PDT | 14 Sep 23 15:07 PDT |
	|                | update-context                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                   |                   |         |         |                     |                     |
	| update-context | functional-398000                                        | functional-398000 | jenkins | v1.31.2 | 14 Sep 23 15:07 PDT | 14 Sep 23 15:07 PDT |
	|                | update-context                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                   |                   |         |         |                     |                     |
	| update-context | functional-398000                                        | functional-398000 | jenkins | v1.31.2 | 14 Sep 23 15:07 PDT | 14 Sep 23 15:07 PDT |
	|                | update-context                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                   |                   |         |         |                     |                     |
	|----------------|----------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/14 15:07:30
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.21.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 15:07:30.183702    3017 out.go:296] Setting OutFile to fd 1 ...
	I0914 15:07:30.183807    3017 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:07:30.183810    3017 out.go:309] Setting ErrFile to fd 2...
	I0914 15:07:30.183812    3017 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:07:30.183926    3017 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17243-1006/.minikube/bin
	I0914 15:07:30.184918    3017 out.go:303] Setting JSON to false
	I0914 15:07:30.200628    3017 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2224,"bootTime":1694727026,"procs":412,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 15:07:30.200710    3017 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0914 15:07:30.204350    3017 out.go:177] * [functional-398000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0914 15:07:30.211344    3017 out.go:177]   - MINIKUBE_LOCATION=17243
	I0914 15:07:30.211477    3017 notify.go:220] Checking for updates...
	I0914 15:07:30.219414    3017 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	I0914 15:07:30.223731    3017 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 15:07:30.231334    3017 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 15:07:30.241221    3017 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	I0914 15:07:30.248360    3017 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 15:07:30.254661    3017 config.go:182] Loaded profile config "functional-398000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 15:07:30.254911    3017 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 15:07:30.258314    3017 out.go:177] * Using the qemu2 driver based on existing profile
	I0914 15:07:30.265280    3017 start.go:298] selected driver: qemu2
	I0914 15:07:30.265286    3017 start.go:902] validating driver "qemu2" against &{Name:functional-398000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:functional-398000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 15:07:30.265352    3017 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 15:07:30.267441    3017 cni.go:84] Creating CNI manager for ""
	I0914 15:07:30.267457    3017 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 15:07:30.267463    3017 start_flags.go:321] config:
	{Name:functional-398000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:functional-398000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 15:07:30.275285    3017 out.go:177] * dry-run validation complete!
	
	* 
	* ==> Docker <==
	* -- Journal begins at Thu 2023-09-14 22:04:38 UTC, ends at Thu 2023-09-14 22:10:33 UTC. --
	Sep 14 22:08:38 functional-398000 dockerd[6313]: time="2023-09-14T22:08:38.280691494Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 14 22:08:38 functional-398000 dockerd[6313]: time="2023-09-14T22:08:38.280723285Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 22:08:38 functional-398000 dockerd[6313]: time="2023-09-14T22:08:38.280739868Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 14 22:08:38 functional-398000 dockerd[6313]: time="2023-09-14T22:08:38.280745035Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 22:08:38 functional-398000 dockerd[6307]: time="2023-09-14T22:08:38.315098104Z" level=info msg="ignoring event" container=83a241d5407781cc1799bb8560490083fe4a808a537b3fe2fd9e8593253d7fd3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 14 22:08:38 functional-398000 dockerd[6313]: time="2023-09-14T22:08:38.315335186Z" level=info msg="shim disconnected" id=83a241d5407781cc1799bb8560490083fe4a808a537b3fe2fd9e8593253d7fd3 namespace=moby
	Sep 14 22:08:38 functional-398000 dockerd[6313]: time="2023-09-14T22:08:38.315580308Z" level=warning msg="cleaning up after shim disconnected" id=83a241d5407781cc1799bb8560490083fe4a808a537b3fe2fd9e8593253d7fd3 namespace=moby
	Sep 14 22:08:38 functional-398000 dockerd[6313]: time="2023-09-14T22:08:38.315585183Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 14 22:09:41 functional-398000 dockerd[6313]: time="2023-09-14T22:09:41.291876874Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 14 22:09:41 functional-398000 dockerd[6313]: time="2023-09-14T22:09:41.291909415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 22:09:41 functional-398000 dockerd[6313]: time="2023-09-14T22:09:41.292249656Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 14 22:09:41 functional-398000 dockerd[6313]: time="2023-09-14T22:09:41.292282614Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 22:09:41 functional-398000 dockerd[6313]: time="2023-09-14T22:09:41.331502693Z" level=info msg="shim disconnected" id=3f8b8523f8855e308b128371d8e43589508c58f3e3ecba54baf6d8210340e4a2 namespace=moby
	Sep 14 22:09:41 functional-398000 dockerd[6313]: time="2023-09-14T22:09:41.331533567Z" level=warning msg="cleaning up after shim disconnected" id=3f8b8523f8855e308b128371d8e43589508c58f3e3ecba54baf6d8210340e4a2 namespace=moby
	Sep 14 22:09:41 functional-398000 dockerd[6313]: time="2023-09-14T22:09:41.331537984Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 14 22:09:41 functional-398000 dockerd[6307]: time="2023-09-14T22:09:41.331850059Z" level=info msg="ignoring event" container=3f8b8523f8855e308b128371d8e43589508c58f3e3ecba54baf6d8210340e4a2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 14 22:10:04 functional-398000 dockerd[6313]: time="2023-09-14T22:10:04.266827765Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 14 22:10:04 functional-398000 dockerd[6313]: time="2023-09-14T22:10:04.266856973Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 22:10:04 functional-398000 dockerd[6313]: time="2023-09-14T22:10:04.266866389Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 14 22:10:04 functional-398000 dockerd[6313]: time="2023-09-14T22:10:04.266872556Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 22:10:04 functional-398000 dockerd[6307]: time="2023-09-14T22:10:04.302306956Z" level=info msg="ignoring event" container=60cd2e6a6a05f4a8b8b2ae83dbc67ac8bc1bd707844234c44b2a6208eab97eeb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 14 22:10:04 functional-398000 dockerd[6313]: time="2023-09-14T22:10:04.302390455Z" level=info msg="shim disconnected" id=60cd2e6a6a05f4a8b8b2ae83dbc67ac8bc1bd707844234c44b2a6208eab97eeb namespace=moby
	Sep 14 22:10:04 functional-398000 dockerd[6313]: time="2023-09-14T22:10:04.302426037Z" level=warning msg="cleaning up after shim disconnected" id=60cd2e6a6a05f4a8b8b2ae83dbc67ac8bc1bd707844234c44b2a6208eab97eeb namespace=moby
	Sep 14 22:10:04 functional-398000 dockerd[6313]: time="2023-09-14T22:10:04.302430245Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 14 22:10:04 functional-398000 dockerd[6313]: time="2023-09-14T22:10:04.310942738Z" level=warning msg="cleanup warnings time=\"2023-09-14T22:10:04Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                  CREATED             STATE               NAME                        ATTEMPT             POD ID
	60cd2e6a6a05f       72565bf5bbedf                                                                                          29 seconds ago      Exited              echoserver-arm              5                   6e48b3959a215
	3f8b8523f8855       72565bf5bbedf                                                                                          52 seconds ago      Exited              echoserver-arm              5                   f498266e07f79
	4dc5655ac3104       kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93         2 minutes ago       Running             kubernetes-dashboard        0                   fa8c4595f9202
	40908ca0a1085       kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c   3 minutes ago       Running             dashboard-metrics-scraper   0                   9f6ca2f9112fe
	50215e1d1661f       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e    3 minutes ago       Exited              mount-munger                0                   676952d79afc8
	c386b23db23e3       nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70                          3 minutes ago       Running             nginx                       0                   de456e52e5307
	b860c7fb80078       97e04611ad434                                                                                          4 minutes ago       Running             coredns                     2                   f22dbd22efc86
	00d5a8d446781       812f5241df7fd                                                                                          4 minutes ago       Running             kube-proxy                  2                   08fb4aa5fd702
	27e65ce8b06ea       b4a5a57e99492                                                                                          4 minutes ago       Running             kube-scheduler              2                   53a4185973ca2
	5d78ec6933fb1       9cdd6470f48c8                                                                                          4 minutes ago       Running             etcd                        2                   6a468539b06d5
	a9108297ffcf3       8b6e1980b7584                                                                                          4 minutes ago       Running             kube-controller-manager     2                   14a97ed1d5b34
	b1ebafd7bfb74       b29fb62480892                                                                                          4 minutes ago       Running             kube-apiserver              0                   a20e702e38f80
	ced7f3bb1f36c       b4a5a57e99492                                                                                          5 minutes ago       Exited              kube-scheduler              1                   b43acab7a6d9a
	bb8f303a73ac4       9cdd6470f48c8                                                                                          5 minutes ago       Exited              etcd                        1                   00a1d00793183
	7cd12005ec6a5       8b6e1980b7584                                                                                          5 minutes ago       Exited              kube-controller-manager     1                   d96c262a83749
	fe836ba5c6a85       812f5241df7fd                                                                                          5 minutes ago       Exited              kube-proxy                  1                   39f241352e4d4
	99a190bec20ca       97e04611ad434                                                                                          5 minutes ago       Exited              coredns                     1                   71d5d5337566d
	
	* 
	* ==> coredns [99a190bec20c] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:41226 - 2540 "HINFO IN 3398687459190580947.5821229649940585176. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004500859s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> coredns [b860c7fb8007] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:35439 - 34679 "HINFO IN 7769998063386430892.3783104102286343921. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004943768s
	[INFO] 10.244.0.1:10002 - 48046 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000098749s
	[INFO] 10.244.0.1:24573 - 55585 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000096041s
	[INFO] 10.244.0.1:18231 - 47578 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000025958s
	[INFO] 10.244.0.1:15227 - 7535 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.001000206s
	[INFO] 10.244.0.1:8155 - 60009 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.00008025s
	[INFO] 10.244.0.1:37791 - 47737 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000111083s
	
	* 
	* ==> describe nodes <==
	* Name:               functional-398000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-398000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7492f2ae2d9b6e62b385ffcd97ebad62c645e82
	                    minikube.k8s.io/name=functional-398000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_14T15_04_54_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Sep 2023 22:04:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-398000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Sep 2023 22:10:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Sep 2023 22:08:09 +0000   Thu, 14 Sep 2023 22:04:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Sep 2023 22:08:09 +0000   Thu, 14 Sep 2023 22:04:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Sep 2023 22:08:09 +0000   Thu, 14 Sep 2023 22:04:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Sep 2023 22:08:09 +0000   Thu, 14 Sep 2023 22:04:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-398000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 83d273568dbb484f86c90a787a13c422
	  System UUID:                83d273568dbb484f86c90a787a13c422
	  Boot ID:                    ce2754fd-8ddf-45a9-87a9-284ccd819cb4
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-759d89bdcc-qb8dx                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m18s
	  default                     hello-node-connect-7799dfb7c6-d6spp           0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m50s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m
	  kube-system                 coredns-5dd5756b68-kws7l                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m24s
	  kube-system                 etcd-functional-398000                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m39s
	  kube-system                 kube-apiserver-functional-398000              250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m26s
	  kube-system                 kube-controller-manager-functional-398000     200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m39s
	  kube-system                 kube-proxy-vvbs7                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m25s
	  kube-system                 kube-scheduler-functional-398000              100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m39s
	  kubernetes-dashboard        dashboard-metrics-scraper-7fd5cb4ddc-4rf85    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m3s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-jpzgt         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m25s                  kube-proxy       
	  Normal  Starting                 5m4s                   kube-proxy       
	  Normal  Starting                 5m23s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  5m44s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 5m44s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    5m43s (x8 over 5m44s)  kubelet          Node functional-398000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m43s (x7 over 5m44s)  kubelet          Node functional-398000 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  5m43s (x8 over 5m44s)  kubelet          Node functional-398000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientMemory  5m39s                  kubelet          Node functional-398000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  5m39s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 5m39s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    5m39s                  kubelet          Node functional-398000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m39s                  kubelet          Node functional-398000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m35s                  kubelet          Node functional-398000 status is now: NodeReady
	  Normal  RegisteredNode           5m25s                  node-controller  Node functional-398000 event: Registered Node functional-398000 in Controller
	  Normal  RegisteredNode           4m52s                  node-controller  Node functional-398000 event: Registered Node functional-398000 in Controller
	  Normal  NodeHasNoDiskPressure    4m29s (x8 over 4m29s)  kubelet          Node functional-398000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  4m29s (x8 over 4m29s)  kubelet          Node functional-398000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 4m29s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     4m29s (x7 over 4m29s)  kubelet          Node functional-398000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m29s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m15s                  node-controller  Node functional-398000 event: Registered Node functional-398000 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.094421] systemd-fstab-generator[3728]: Ignoring "noauto" for root device
	[  +5.146047] kauditd_printk_skb: 23 callbacks suppressed
	[  +6.243145] systemd-fstab-generator[4238]: Ignoring "noauto" for root device
	[  +0.100509] systemd-fstab-generator[4305]: Ignoring "noauto" for root device
	[  +0.106345] systemd-fstab-generator[4385]: Ignoring "noauto" for root device
	[  +0.102020] systemd-fstab-generator[4418]: Ignoring "noauto" for root device
	[  +0.115231] systemd-fstab-generator[4490]: Ignoring "noauto" for root device
	[  +4.939186] kauditd_printk_skb: 34 callbacks suppressed
	[ +22.179518] systemd-fstab-generator[5888]: Ignoring "noauto" for root device
	[  +0.153309] systemd-fstab-generator[5921]: Ignoring "noauto" for root device
	[  +0.099499] systemd-fstab-generator[5932]: Ignoring "noauto" for root device
	[  +0.090804] systemd-fstab-generator[5945]: Ignoring "noauto" for root device
	[Sep14 22:06] systemd-fstab-generator[6458]: Ignoring "noauto" for root device
	[  +0.084092] systemd-fstab-generator[6469]: Ignoring "noauto" for root device
	[  +0.084837] systemd-fstab-generator[6480]: Ignoring "noauto" for root device
	[  +0.085960] systemd-fstab-generator[6491]: Ignoring "noauto" for root device
	[  +0.122048] systemd-fstab-generator[6562]: Ignoring "noauto" for root device
	[  +0.900773] systemd-fstab-generator[6809]: Ignoring "noauto" for root device
	[  +3.826634] kauditd_printk_skb: 34 callbacks suppressed
	[ +25.618689] kauditd_printk_skb: 11 callbacks suppressed
	[  +9.983692] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.829662] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[Sep14 22:07] kauditd_printk_skb: 1 callbacks suppressed
	[  +8.654754] kauditd_printk_skb: 1 callbacks suppressed
	[ +15.091943] kauditd_printk_skb: 8 callbacks suppressed
	
	* 
	* ==> etcd [5d78ec6933fb] <==
	* {"level":"info","ts":"2023-09-14T22:06:05.020009Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-09-14T22:06:05.020039Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2023-09-14T22:06:05.020088Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-14T22:06:05.020097Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-14T22:06:05.0201Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-14T22:06:05.020168Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-09-14T22:06:05.020171Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-09-14T22:06:05.020379Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 switched to configuration voters=(527499358918876438)"}
	{"level":"info","ts":"2023-09-14T22:06:05.020401Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2023-09-14T22:06:05.020433Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-14T22:06:05.020443Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-14T22:06:05.99662Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2023-09-14T22:06:05.996674Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-09-14T22:06:05.996729Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2023-09-14T22:06:05.996745Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2023-09-14T22:06:05.996842Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2023-09-14T22:06:05.996853Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2023-09-14T22:06:05.996881Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2023-09-14T22:06:05.998256Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-14T22:06:05.998368Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-14T22:06:05.999198Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-14T22:06:05.999205Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2023-09-14T22:06:05.998258Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-398000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-14T22:06:05.999296Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-14T22:06:06.001195Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> etcd [bb8f303a73ac] <==
	* {"level":"info","ts":"2023-09-14T22:05:26.931093Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-14T22:05:28.178993Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 2"}
	{"level":"info","ts":"2023-09-14T22:05:28.179221Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-09-14T22:05:28.179272Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2023-09-14T22:05:28.179303Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2023-09-14T22:05:28.179326Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2023-09-14T22:05:28.179354Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2023-09-14T22:05:28.1794Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2023-09-14T22:05:28.181915Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-14T22:05:28.18192Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-398000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-14T22:05:28.182626Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-14T22:05:28.184468Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2023-09-14T22:05:28.185646Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-14T22:05:28.185995Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-14T22:05:28.186039Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-14T22:05:51.569901Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-09-14T22:05:51.569924Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"functional-398000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2023-09-14T22:05:51.569955Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-14T22:05:51.570005Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-14T22:05:51.577717Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-14T22:05:51.577741Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"info","ts":"2023-09-14T22:05:51.578942Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2023-09-14T22:05:51.580076Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-09-14T22:05:51.580105Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-09-14T22:05:51.58011Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"functional-398000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	* 
	* ==> kernel <==
	*  22:10:33 up 5 min,  0 users,  load average: 0.20, 0.28, 0.16
	Linux functional-398000 5.10.57 #1 SMP PREEMPT Wed Sep 13 19:05:16 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [b1ebafd7bfb7] <==
	* I0914 22:06:06.637460       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0914 22:06:06.637587       1 shared_informer.go:318] Caches are synced for configmaps
	I0914 22:06:06.644100       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0914 22:06:06.644136       1 aggregator.go:166] initial CRD sync complete...
	I0914 22:06:06.644152       1 autoregister_controller.go:141] Starting autoregister controller
	I0914 22:06:06.644159       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0914 22:06:06.644161       1 cache.go:39] Caches are synced for autoregister controller
	I0914 22:06:06.682012       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0914 22:06:07.542588       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0914 22:06:07.648418       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.105.4]
	I0914 22:06:07.648982       1 controller.go:624] quota admission added evaluator for: endpoints
	I0914 22:06:07.652120       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0914 22:06:07.723308       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0914 22:06:07.727183       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0914 22:06:07.744395       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0914 22:06:07.757113       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0914 22:06:07.760308       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0914 22:06:28.305447       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.102.175.88"}
	I0914 22:06:33.131267       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.103.117.53"}
	I0914 22:06:43.516530       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0914 22:06:43.559600       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.99.1.28"}
	I0914 22:07:15.065077       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.104.228.24"}
	I0914 22:07:30.798927       1 controller.go:624] quota admission added evaluator for: namespaces
	I0914 22:07:30.896908       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.229.14"}
	I0914 22:07:30.906045       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.69.64"}
	
	* 
	* ==> kube-controller-manager [7cd12005ec6a] <==
	* I0914 22:05:41.757114       1 shared_informer.go:318] Caches are synced for ephemeral
	I0914 22:05:41.804499       1 shared_informer.go:318] Caches are synced for daemon sets
	I0914 22:05:41.804522       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0914 22:05:41.806551       1 shared_informer.go:318] Caches are synced for persistent volume
	I0914 22:05:41.806635       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0914 22:05:41.806987       1 shared_informer.go:318] Caches are synced for service account
	I0914 22:05:41.807042       1 shared_informer.go:318] Caches are synced for TTL
	I0914 22:05:41.809073       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0914 22:05:41.809087       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0914 22:05:41.809132       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0914 22:05:41.809160       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0914 22:05:41.814391       1 shared_informer.go:318] Caches are synced for endpoint
	I0914 22:05:41.814476       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0914 22:05:41.814516       1 shared_informer.go:318] Caches are synced for node
	I0914 22:05:41.814565       1 range_allocator.go:174] "Sending events to api server"
	I0914 22:05:41.814592       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0914 22:05:41.814606       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0914 22:05:41.814640       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0914 22:05:41.857071       1 shared_informer.go:318] Caches are synced for PV protection
	I0914 22:05:41.913246       1 shared_informer.go:318] Caches are synced for resource quota
	I0914 22:05:41.945595       1 shared_informer.go:318] Caches are synced for resource quota
	I0914 22:05:41.956418       1 shared_informer.go:318] Caches are synced for attach detach
	I0914 22:05:42.327421       1 shared_informer.go:318] Caches are synced for garbage collector
	I0914 22:05:42.332753       1 shared_informer.go:318] Caches are synced for garbage collector
	I0914 22:05:42.332795       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	* 
	* ==> kube-controller-manager [a9108297ffcf] <==
	* I0914 22:07:30.868715       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="6.111758ms"
	I0914 22:07:30.875587       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="6.824994ms"
	I0914 22:07:30.875617       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="12.208µs"
	I0914 22:07:30.876093       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-jpzgt"
	I0914 22:07:30.884164       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="13.417µs"
	I0914 22:07:30.884576       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="15.738699ms"
	I0914 22:07:30.889094       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="4.498455ms"
	I0914 22:07:30.889222       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="107.873µs"
	I0914 22:07:30.891531       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="31.582µs"
	I0914 22:07:33.786451       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="2.896698ms"
	I0914 22:07:33.786514       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="16.291µs"
	I0914 22:07:38.810487       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="3.094533ms"
	I0914 22:07:38.810684       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="19.5µs"
	I0914 22:07:42.255026       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="22.958µs"
	I0914 22:07:43.250753       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="24.792µs"
	I0914 22:07:56.896113       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="26.374µs"
	I0914 22:08:10.963217       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="26.875µs"
	I0914 22:08:11.251090       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="31.625µs"
	I0914 22:08:23.254449       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="34.875µs"
	I0914 22:08:39.099100       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="39.167µs"
	I0914 22:08:53.249858       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="42.208µs"
	I0914 22:09:41.379643       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="39.666µs"
	I0914 22:09:54.252582       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="45.249µs"
	I0914 22:10:04.493958       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="23.416µs"
	I0914 22:10:19.254462       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="28.207µs"
	
	* 
	* ==> kube-proxy [00d5a8d44678] <==
	* I0914 22:06:07.816355       1 server_others.go:69] "Using iptables proxy"
	I0914 22:06:07.820664       1 node.go:141] Successfully retrieved node IP: 192.168.105.4
	I0914 22:06:07.845781       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0914 22:06:07.845795       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0914 22:06:07.847332       1 server_others.go:152] "Using iptables Proxier"
	I0914 22:06:07.847352       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0914 22:06:07.847414       1 server.go:846] "Version info" version="v1.28.1"
	I0914 22:06:07.847419       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 22:06:07.848575       1 config.go:188] "Starting service config controller"
	I0914 22:06:07.848596       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0914 22:06:07.848605       1 config.go:97] "Starting endpoint slice config controller"
	I0914 22:06:07.848607       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0914 22:06:07.848878       1 config.go:315] "Starting node config controller"
	I0914 22:06:07.848880       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0914 22:06:07.949411       1 shared_informer.go:318] Caches are synced for node config
	I0914 22:06:07.949481       1 shared_informer.go:318] Caches are synced for service config
	I0914 22:06:07.949518       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-proxy [fe836ba5c6a8] <==
	* I0914 22:05:27.200738       1 server_others.go:69] "Using iptables proxy"
	I0914 22:05:28.838300       1 node.go:141] Successfully retrieved node IP: 192.168.105.4
	I0914 22:05:28.852531       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0914 22:05:28.852564       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0914 22:05:28.853277       1 server_others.go:152] "Using iptables Proxier"
	I0914 22:05:28.853310       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0914 22:05:28.853369       1 server.go:846] "Version info" version="v1.28.1"
	I0914 22:05:28.853373       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 22:05:28.853778       1 config.go:315] "Starting node config controller"
	I0914 22:05:28.853782       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0914 22:05:28.853939       1 config.go:188] "Starting service config controller"
	I0914 22:05:28.853941       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0914 22:05:28.853946       1 config.go:97] "Starting endpoint slice config controller"
	I0914 22:05:28.853948       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0914 22:05:28.954393       1 shared_informer.go:318] Caches are synced for node config
	I0914 22:05:28.954420       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0914 22:05:28.954432       1 shared_informer.go:318] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [27e65ce8b06e] <==
	* I0914 22:06:05.580685       1 serving.go:348] Generated self-signed cert in-memory
	W0914 22:06:06.569342       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0914 22:06:06.569405       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0914 22:06:06.569422       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0914 22:06:06.569450       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0914 22:06:06.605543       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.1"
	I0914 22:06:06.605850       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 22:06:06.606744       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0914 22:06:06.606956       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0914 22:06:06.606964       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0914 22:06:06.606977       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0914 22:06:06.707892       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [ced7f3bb1f36] <==
	* I0914 22:05:27.427407       1 serving.go:348] Generated self-signed cert in-memory
	W0914 22:05:28.797309       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0914 22:05:28.797325       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0914 22:05:28.797329       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0914 22:05:28.797332       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0914 22:05:28.837226       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.1"
	I0914 22:05:28.837536       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 22:05:28.838643       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0914 22:05:28.838657       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0914 22:05:28.838949       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0914 22:05:28.839738       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0914 22:05:28.939539       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0914 22:05:51.550928       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0914 22:05:51.550993       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I0914 22:05:51.551057       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0914 22:05:51.551158       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-09-14 22:04:38 UTC, ends at Thu 2023-09-14 22:10:33 UTC. --
	Sep 14 22:09:40 functional-398000 kubelet[6815]: E0914 22:09:40.246922    6815 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=echoserver-arm pod=hello-node-759d89bdcc-qb8dx_default(b4b51dff-689e-491b-8fc5-59f7380892a7)\"" pod="default/hello-node-759d89bdcc-qb8dx" podUID="b4b51dff-689e-491b-8fc5-59f7380892a7"
	Sep 14 22:09:41 functional-398000 kubelet[6815]: I0914 22:09:41.246983    6815 scope.go:117] "RemoveContainer" containerID="35ebfe948c9ee338ccdc808e2b27958a0023a358b7e1298a988018b8274deb92"
	Sep 14 22:09:41 functional-398000 kubelet[6815]: I0914 22:09:41.375240    6815 scope.go:117] "RemoveContainer" containerID="35ebfe948c9ee338ccdc808e2b27958a0023a358b7e1298a988018b8274deb92"
	Sep 14 22:09:41 functional-398000 kubelet[6815]: I0914 22:09:41.375355    6815 scope.go:117] "RemoveContainer" containerID="3f8b8523f8855e308b128371d8e43589508c58f3e3ecba54baf6d8210340e4a2"
	Sep 14 22:09:41 functional-398000 kubelet[6815]: E0914 22:09:41.375433    6815 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=echoserver-arm pod=hello-node-connect-7799dfb7c6-d6spp_default(17388bde-57a9-4d82-ac4a-1ca198a0a870)\"" pod="default/hello-node-connect-7799dfb7c6-d6spp" podUID="17388bde-57a9-4d82-ac4a-1ca198a0a870"
	Sep 14 22:09:53 functional-398000 kubelet[6815]: I0914 22:09:53.246292    6815 scope.go:117] "RemoveContainer" containerID="83a241d5407781cc1799bb8560490083fe4a808a537b3fe2fd9e8593253d7fd3"
	Sep 14 22:09:53 functional-398000 kubelet[6815]: E0914 22:09:53.246402    6815 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=echoserver-arm pod=hello-node-759d89bdcc-qb8dx_default(b4b51dff-689e-491b-8fc5-59f7380892a7)\"" pod="default/hello-node-759d89bdcc-qb8dx" podUID="b4b51dff-689e-491b-8fc5-59f7380892a7"
	Sep 14 22:09:54 functional-398000 kubelet[6815]: I0914 22:09:54.246551    6815 scope.go:117] "RemoveContainer" containerID="3f8b8523f8855e308b128371d8e43589508c58f3e3ecba54baf6d8210340e4a2"
	Sep 14 22:09:54 functional-398000 kubelet[6815]: E0914 22:09:54.246950    6815 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=echoserver-arm pod=hello-node-connect-7799dfb7c6-d6spp_default(17388bde-57a9-4d82-ac4a-1ca198a0a870)\"" pod="default/hello-node-connect-7799dfb7c6-d6spp" podUID="17388bde-57a9-4d82-ac4a-1ca198a0a870"
	Sep 14 22:10:04 functional-398000 kubelet[6815]: I0914 22:10:04.245881    6815 scope.go:117] "RemoveContainer" containerID="83a241d5407781cc1799bb8560490083fe4a808a537b3fe2fd9e8593253d7fd3"
	Sep 14 22:10:04 functional-398000 kubelet[6815]: E0914 22:10:04.250365    6815 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 14 22:10:04 functional-398000 kubelet[6815]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 14 22:10:04 functional-398000 kubelet[6815]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 22:10:04 functional-398000 kubelet[6815]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 14 22:10:04 functional-398000 kubelet[6815]: I0914 22:10:04.355602    6815 scope.go:117] "RemoveContainer" containerID="83a241d5407781cc1799bb8560490083fe4a808a537b3fe2fd9e8593253d7fd3"
	Sep 14 22:10:04 functional-398000 kubelet[6815]: I0914 22:10:04.489975    6815 scope.go:117] "RemoveContainer" containerID="60cd2e6a6a05f4a8b8b2ae83dbc67ac8bc1bd707844234c44b2a6208eab97eeb"
	Sep 14 22:10:04 functional-398000 kubelet[6815]: E0914 22:10:04.490064    6815 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=echoserver-arm pod=hello-node-759d89bdcc-qb8dx_default(b4b51dff-689e-491b-8fc5-59f7380892a7)\"" pod="default/hello-node-759d89bdcc-qb8dx" podUID="b4b51dff-689e-491b-8fc5-59f7380892a7"
	Sep 14 22:10:08 functional-398000 kubelet[6815]: I0914 22:10:08.246340    6815 scope.go:117] "RemoveContainer" containerID="3f8b8523f8855e308b128371d8e43589508c58f3e3ecba54baf6d8210340e4a2"
	Sep 14 22:10:08 functional-398000 kubelet[6815]: E0914 22:10:08.246478    6815 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=echoserver-arm pod=hello-node-connect-7799dfb7c6-d6spp_default(17388bde-57a9-4d82-ac4a-1ca198a0a870)\"" pod="default/hello-node-connect-7799dfb7c6-d6spp" podUID="17388bde-57a9-4d82-ac4a-1ca198a0a870"
	Sep 14 22:10:19 functional-398000 kubelet[6815]: I0914 22:10:19.249437    6815 scope.go:117] "RemoveContainer" containerID="60cd2e6a6a05f4a8b8b2ae83dbc67ac8bc1bd707844234c44b2a6208eab97eeb"
	Sep 14 22:10:19 functional-398000 kubelet[6815]: E0914 22:10:19.249913    6815 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=echoserver-arm pod=hello-node-759d89bdcc-qb8dx_default(b4b51dff-689e-491b-8fc5-59f7380892a7)\"" pod="default/hello-node-759d89bdcc-qb8dx" podUID="b4b51dff-689e-491b-8fc5-59f7380892a7"
	Sep 14 22:10:23 functional-398000 kubelet[6815]: I0914 22:10:23.247004    6815 scope.go:117] "RemoveContainer" containerID="3f8b8523f8855e308b128371d8e43589508c58f3e3ecba54baf6d8210340e4a2"
	Sep 14 22:10:23 functional-398000 kubelet[6815]: E0914 22:10:23.247151    6815 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=echoserver-arm pod=hello-node-connect-7799dfb7c6-d6spp_default(17388bde-57a9-4d82-ac4a-1ca198a0a870)\"" pod="default/hello-node-connect-7799dfb7c6-d6spp" podUID="17388bde-57a9-4d82-ac4a-1ca198a0a870"
	Sep 14 22:10:32 functional-398000 kubelet[6815]: I0914 22:10:32.247711    6815 scope.go:117] "RemoveContainer" containerID="60cd2e6a6a05f4a8b8b2ae83dbc67ac8bc1bd707844234c44b2a6208eab97eeb"
	Sep 14 22:10:32 functional-398000 kubelet[6815]: E0914 22:10:32.248121    6815 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=echoserver-arm pod=hello-node-759d89bdcc-qb8dx_default(b4b51dff-689e-491b-8fc5-59f7380892a7)\"" pod="default/hello-node-759d89bdcc-qb8dx" podUID="b4b51dff-689e-491b-8fc5-59f7380892a7"
	
	* 
	* ==> kubernetes-dashboard [4dc5655ac310] <==
	* 2023/09/14 22:07:37 Using namespace: kubernetes-dashboard
	2023/09/14 22:07:37 Using in-cluster config to connect to apiserver
	2023/09/14 22:07:37 Using secret token for csrf signing
	2023/09/14 22:07:37 Initializing csrf token from kubernetes-dashboard-csrf secret
	2023/09/14 22:07:37 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2023/09/14 22:07:37 Successful initial request to the apiserver, version: v1.28.1
	2023/09/14 22:07:37 Generating JWE encryption key
	2023/09/14 22:07:37 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2023/09/14 22:07:37 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2023/09/14 22:07:38 Initializing JWE encryption key from synchronized object
	2023/09/14 22:07:38 Creating in-cluster Sidecar client
	2023/09/14 22:07:38 Successful request to sidecar
	2023/09/14 22:07:38 Serving insecurely on HTTP port: 9090
	2023/09/14 22:07:37 Starting overwatch
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-398000 -n functional-398000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-398000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-398000 describe pod busybox-mount
helpers_test.go:282: (dbg) kubectl --context functional-398000 describe pod busybox-mount:

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-398000/192.168.105.4
	Start Time:       Thu, 14 Sep 2023 15:07:23 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  mount-munger:
	    Container ID:  docker://50215e1d1661f67f92d078e825fc08971010c2be6d4541e9de5dddde50640b51
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Thu, 14 Sep 2023 15:07:25 -0700
	      Finished:     Thu, 14 Sep 2023 15:07:25 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rv7lr (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-rv7lr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  3m10s  default-scheduler  Successfully assigned default/busybox-mount to functional-398000
	  Normal  Pulling    3m10s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     3m8s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.652s (1.652s including waiting)
	  Normal  Created    3m8s   kubelet            Created container mount-munger
	  Normal  Started    3m8s   kubelet            Started container mount-munger

-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (240.98s)
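Note: the kubelet journal above is dominated by CrashLoopBackOff back-offs for the echoserver-arm containers in the hello-node and hello-node-connect pods; the busybox-mount pod itself completed (Status: Succeeded). A minimal follow-up sketch, assuming the same kubectl context the post-mortem helpers use (pod names are copied from the journal entries and change between runs):

	# list pods stuck in CrashLoopBackOff
	kubectl --context functional-398000 get pods -A | grep -i crashloop
	# pull the previous (crashed) container logs for the two pods named in the journal
	kubectl --context functional-398000 logs hello-node-759d89bdcc-qb8dx --previous
	kubectl --context functional-398000 logs hello-node-connect-7799dfb7c6-d6spp --previous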

TestImageBuild/serial/BuildWithBuildArg (1.05s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-717000
image_test.go:105: failed to pass build-args with args: "out/minikube-darwin-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-717000" : 
-- stdout --
	Sending build context to Docker daemon  2.048kB
	Step 1/5 : FROM gcr.io/google-containers/alpine-with-bash:1.0
	 ---> 822c13824dc2
	Step 2/5 : ARG ENV_A
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in 36e901524740
	Removing intermediate container 36e901524740
	 ---> 975b63f229c6
	Step 3/5 : ARG ENV_B
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in e2369ae0ff45
	Removing intermediate container e2369ae0ff45
	 ---> f1ab339a13d3
	Step 4/5 : RUN echo "test-build-arg" $ENV_A $ENV_B
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in ee486c20e98f
	exec /bin/sh: exec format error
	

-- /stdout --
** stderr ** 
	DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
	            Install the buildx component to build images with BuildKit:
	            https://docs.docker.com/go/buildx/
	
	The command '/bin/sh -c echo "test-build-arg" $ENV_A $ENV_B' returned a non-zero code: 1

** /stderr **
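Note: the Step 4/5 failure ("exec /bin/sh: exec format error") follows from the platform warnings in the build output: the base image gcr.io/google-containers/alpine-with-bash:1.0 only provides linux/amd64, while the build runs on the linux/arm64/v8 minikube VM, so its shell cannot execute during RUN steps. A hedged reproduction/workaround sketch outside the test harness, assuming a Docker daemon where QEMU user-mode emulation can be registered (the tonistiigi/binfmt image and the --platform/--build-arg flags are standard Docker tooling, not part of this test):

	# Reproduce on any arm64 Docker host: the amd64-only base makes RUN steps fail with "exec format error"
	docker build --platform linux/amd64 --build-arg ENV_A=test_env_str -t aaa:latest ./testdata/image-build/test-arg
	# Hypothetical workaround (assumes privileged containers are allowed): register qemu-user emulation for amd64, then retry
	docker run --privileged --rm tonistiigi/binfmt --install amd64
	docker build --platform linux/amd64 --build-arg ENV_A=test_env_str -t aaa:latest ./testdata/image-build/test-arg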
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-717000 -n image-717000
helpers_test.go:244: <<< TestImageBuild/serial/BuildWithBuildArg FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestImageBuild/serial/BuildWithBuildArg]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p image-717000 logs -n 25
helpers_test.go:252: TestImageBuild/serial/BuildWithBuildArg logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|----------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                           Args                           |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|----------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| image          | functional-398000 image ls                               | functional-398000 | jenkins | v1.31.2 | 14 Sep 23 15:07 PDT | 14 Sep 23 15:07 PDT |
	| image          | functional-398000 image save --daemon                    | functional-398000 | jenkins | v1.31.2 | 14 Sep 23 15:07 PDT | 14 Sep 23 15:07 PDT |
	|                | gcr.io/google-containers/addon-resizer:functional-398000 |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| ssh            | functional-398000 ssh sudo cat                           | functional-398000 | jenkins | v1.31.2 | 14 Sep 23 15:07 PDT | 14 Sep 23 15:07 PDT |
	|                | /etc/ssl/certs/1425.pem                                  |                   |         |         |                     |                     |
	| ssh            | functional-398000 ssh sudo cat                           | functional-398000 | jenkins | v1.31.2 | 14 Sep 23 15:07 PDT | 14 Sep 23 15:07 PDT |
	|                | /usr/share/ca-certificates/1425.pem                      |                   |         |         |                     |                     |
	| ssh            | functional-398000 ssh sudo cat                           | functional-398000 | jenkins | v1.31.2 | 14 Sep 23 15:07 PDT | 14 Sep 23 15:07 PDT |
	|                | /etc/ssl/certs/51391683.0                                |                   |         |         |                     |                     |
	| ssh            | functional-398000 ssh sudo cat                           | functional-398000 | jenkins | v1.31.2 | 14 Sep 23 15:07 PDT | 14 Sep 23 15:07 PDT |
	|                | /etc/ssl/certs/14252.pem                                 |                   |         |         |                     |                     |
	| ssh            | functional-398000 ssh sudo cat                           | functional-398000 | jenkins | v1.31.2 | 14 Sep 23 15:07 PDT | 14 Sep 23 15:07 PDT |
	|                | /usr/share/ca-certificates/14252.pem                     |                   |         |         |                     |                     |
	| ssh            | functional-398000 ssh sudo cat                           | functional-398000 | jenkins | v1.31.2 | 14 Sep 23 15:07 PDT | 14 Sep 23 15:07 PDT |
	|                | /etc/ssl/certs/3ec20f2e.0                                |                   |         |         |                     |                     |
	| docker-env     | functional-398000 docker-env                             | functional-398000 | jenkins | v1.31.2 | 14 Sep 23 15:07 PDT | 14 Sep 23 15:07 PDT |
	| docker-env     | functional-398000 docker-env                             | functional-398000 | jenkins | v1.31.2 | 14 Sep 23 15:07 PDT | 14 Sep 23 15:07 PDT |
	| ssh            | functional-398000 ssh sudo cat                           | functional-398000 | jenkins | v1.31.2 | 14 Sep 23 15:07 PDT | 14 Sep 23 15:07 PDT |
	|                | /etc/test/nested/copy/1425/hosts                         |                   |         |         |                     |                     |
	| image          | functional-398000                                        | functional-398000 | jenkins | v1.31.2 | 14 Sep 23 15:07 PDT | 14 Sep 23 15:07 PDT |
	|                | image ls --format short                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-398000                                        | functional-398000 | jenkins | v1.31.2 | 14 Sep 23 15:07 PDT | 14 Sep 23 15:07 PDT |
	|                | image ls --format yaml                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-398000                                        | functional-398000 | jenkins | v1.31.2 | 14 Sep 23 15:07 PDT | 14 Sep 23 15:07 PDT |
	|                | image ls --format json                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-398000                                        | functional-398000 | jenkins | v1.31.2 | 14 Sep 23 15:07 PDT | 14 Sep 23 15:07 PDT |
	|                | image ls --format table                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| ssh            | functional-398000 ssh pgrep                              | functional-398000 | jenkins | v1.31.2 | 14 Sep 23 15:07 PDT |                     |
	|                | buildkitd                                                |                   |         |         |                     |                     |
	| image          | functional-398000 image build -t                         | functional-398000 | jenkins | v1.31.2 | 14 Sep 23 15:07 PDT | 14 Sep 23 15:07 PDT |
	|                | localhost/my-image:functional-398000                     |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                         |                   |         |         |                     |                     |
	| image          | functional-398000 image ls                               | functional-398000 | jenkins | v1.31.2 | 14 Sep 23 15:07 PDT | 14 Sep 23 15:07 PDT |
	| update-context | functional-398000                                        | functional-398000 | jenkins | v1.31.2 | 14 Sep 23 15:07 PDT | 14 Sep 23 15:07 PDT |
	|                | update-context                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                   |                   |         |         |                     |                     |
	| update-context | functional-398000                                        | functional-398000 | jenkins | v1.31.2 | 14 Sep 23 15:07 PDT | 14 Sep 23 15:07 PDT |
	|                | update-context                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                   |                   |         |         |                     |                     |
	| update-context | functional-398000                                        | functional-398000 | jenkins | v1.31.2 | 14 Sep 23 15:07 PDT | 14 Sep 23 15:07 PDT |
	|                | update-context                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                   |                   |         |         |                     |                     |
	| delete         | -p functional-398000                                     | functional-398000 | jenkins | v1.31.2 | 14 Sep 23 15:10 PDT | 14 Sep 23 15:10 PDT |
	| start          | -p image-717000 --driver=qemu2                           | image-717000      | jenkins | v1.31.2 | 14 Sep 23 15:10 PDT | 14 Sep 23 15:11 PDT |
	|                |                                                          |                   |         |         |                     |                     |
	| image          | build -t aaa:latest                                      | image-717000      | jenkins | v1.31.2 | 14 Sep 23 15:11 PDT | 14 Sep 23 15:11 PDT |
	|                | ./testdata/image-build/test-normal                       |                   |         |         |                     |                     |
	|                | -p image-717000                                          |                   |         |         |                     |                     |
	| image          | build -t aaa:latest                                      | image-717000      | jenkins | v1.31.2 | 14 Sep 23 15:11 PDT | 14 Sep 23 15:11 PDT |
	|                | --build-opt=build-arg=ENV_A=test_env_str                 |                   |         |         |                     |                     |
	|                | --build-opt=no-cache                                     |                   |         |         |                     |                     |
	|                | ./testdata/image-build/test-arg -p                       |                   |         |         |                     |                     |
	|                | image-717000                                             |                   |         |         |                     |                     |
	|----------------|----------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/14 15:10:34
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.21.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 15:10:34.093898    3284 out.go:296] Setting OutFile to fd 1 ...
	I0914 15:10:34.094031    3284 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:10:34.094032    3284 out.go:309] Setting ErrFile to fd 2...
	I0914 15:10:34.094034    3284 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:10:34.094169    3284 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17243-1006/.minikube/bin
	I0914 15:10:34.095207    3284 out.go:303] Setting JSON to false
	I0914 15:10:34.111213    3284 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2408,"bootTime":1694727026,"procs":404,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 15:10:34.111272    3284 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0914 15:10:34.115515    3284 out.go:177] * [image-717000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0914 15:10:34.122531    3284 out.go:177]   - MINIKUBE_LOCATION=17243
	I0914 15:10:34.122545    3284 notify.go:220] Checking for updates...
	I0914 15:10:34.125448    3284 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	I0914 15:10:34.128445    3284 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 15:10:34.131496    3284 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 15:10:34.132449    3284 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	I0914 15:10:34.135459    3284 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 15:10:34.138656    3284 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 15:10:34.142336    3284 out.go:177] * Using the qemu2 driver based on user configuration
	I0914 15:10:34.148436    3284 start.go:298] selected driver: qemu2
	I0914 15:10:34.148438    3284 start.go:902] validating driver "qemu2" against <nil>
	I0914 15:10:34.148448    3284 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 15:10:34.148528    3284 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0914 15:10:34.151520    3284 out.go:177] * Automatically selected the socket_vmnet network
	I0914 15:10:34.156771    3284 start_flags.go:384] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0914 15:10:34.156855    3284 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0914 15:10:34.156866    3284 cni.go:84] Creating CNI manager for ""
	I0914 15:10:34.156886    3284 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 15:10:34.156889    3284 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0914 15:10:34.156895    3284 start_flags.go:321] config:
	{Name:image-717000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:image-717000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 15:10:34.161331    3284 iso.go:125] acquiring lock: {Name:mkd5b68b252c18146a825ff5365883948b3c6983 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 15:10:34.167413    3284 out.go:177] * Starting control plane node image-717000 in cluster image-717000
	I0914 15:10:34.171489    3284 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0914 15:10:34.171506    3284 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0914 15:10:34.171518    3284 cache.go:57] Caching tarball of preloaded images
	I0914 15:10:34.171588    3284 preload.go:174] Found /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 15:10:34.171592    3284 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0914 15:10:34.171826    3284 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/image-717000/config.json ...
	I0914 15:10:34.171838    3284 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/image-717000/config.json: {Name:mk424db5ddd864d3705f1fabe2b2bf7809d5aa58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 15:10:34.172028    3284 start.go:365] acquiring machines lock for image-717000: {Name:mk831f2bd5a9a0c932c8a59972319d68edb53d07 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 15:10:34.172057    3284 start.go:369] acquired machines lock for "image-717000" in 25.375µs
	I0914 15:10:34.172069    3284 start.go:93] Provisioning new machine with config: &{Name:image-717000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:image-717000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 15:10:34.172096    3284 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 15:10:34.180462    3284 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0914 15:10:34.202122    3284 start.go:159] libmachine.API.Create for "image-717000" (driver="qemu2")
	I0914 15:10:34.202146    3284 client.go:168] LocalClient.Create starting
	I0914 15:10:34.202206    3284 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem
	I0914 15:10:34.202231    3284 main.go:141] libmachine: Decoding PEM data...
	I0914 15:10:34.202244    3284 main.go:141] libmachine: Parsing certificate...
	I0914 15:10:34.202290    3284 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/cert.pem
	I0914 15:10:34.202306    3284 main.go:141] libmachine: Decoding PEM data...
	I0914 15:10:34.202311    3284 main.go:141] libmachine: Parsing certificate...
	I0914 15:10:34.202624    3284 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17243-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1694625400-17243-arm64.iso...
	I0914 15:10:34.319056    3284 main.go:141] libmachine: Creating SSH key...
	I0914 15:10:34.429175    3284 main.go:141] libmachine: Creating Disk image...
	I0914 15:10:34.429179    3284 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 15:10:34.429311    3284 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/image-717000/disk.qcow2.raw /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/image-717000/disk.qcow2
	I0914 15:10:34.438369    3284 main.go:141] libmachine: STDOUT: 
	I0914 15:10:34.438380    3284 main.go:141] libmachine: STDERR: 
	I0914 15:10:34.438437    3284 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/image-717000/disk.qcow2 +20000M
	I0914 15:10:34.445781    3284 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 15:10:34.445792    3284 main.go:141] libmachine: STDERR: 
	I0914 15:10:34.445808    3284 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/image-717000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/image-717000/disk.qcow2
	I0914 15:10:34.445812    3284 main.go:141] libmachine: Starting QEMU VM...
	I0914 15:10:34.445845    3284 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/image-717000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/image-717000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/image-717000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:f2:f2:dc:10:93 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/image-717000/disk.qcow2
	I0914 15:10:34.480100    3284 main.go:141] libmachine: STDOUT: 
	I0914 15:10:34.480122    3284 main.go:141] libmachine: STDERR: 
	I0914 15:10:34.480125    3284 main.go:141] libmachine: Attempt 0
	I0914 15:10:34.480144    3284 main.go:141] libmachine: Searching for 4a:f2:f2:dc:10:93 in /var/db/dhcpd_leases ...
	I0914 15:10:34.480203    3284 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0914 15:10:34.480219    3284 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:42:d7:28:b6:10:34 ID:1,42:d7:28:b6:10:34 Lease:0x6504d4f6}
	I0914 15:10:34.480227    3284 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:a2:d7:76:1d:83:30 ID:1,a2:d7:76:1d:83:30 Lease:0x65038369}
	I0914 15:10:34.480232    3284 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:fa:ab:b1:c2:6f:25 ID:1,fa:ab:b1:c2:6f:25 Lease:0x6503833e}
	I0914 15:10:36.483009    3284 main.go:141] libmachine: Attempt 1
	I0914 15:10:36.483062    3284 main.go:141] libmachine: Searching for 4a:f2:f2:dc:10:93 in /var/db/dhcpd_leases ...
	I0914 15:10:36.483341    3284 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0914 15:10:36.483386    3284 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:42:d7:28:b6:10:34 ID:1,42:d7:28:b6:10:34 Lease:0x6504d4f6}
	I0914 15:10:36.483414    3284 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:a2:d7:76:1d:83:30 ID:1,a2:d7:76:1d:83:30 Lease:0x65038369}
	I0914 15:10:36.483471    3284 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:fa:ab:b1:c2:6f:25 ID:1,fa:ab:b1:c2:6f:25 Lease:0x6503833e}
	I0914 15:10:38.484420    3284 main.go:141] libmachine: Attempt 2
	I0914 15:10:38.484435    3284 main.go:141] libmachine: Searching for 4a:f2:f2:dc:10:93 in /var/db/dhcpd_leases ...
	I0914 15:10:38.484553    3284 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0914 15:10:38.484563    3284 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:42:d7:28:b6:10:34 ID:1,42:d7:28:b6:10:34 Lease:0x6504d4f6}
	I0914 15:10:38.484576    3284 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:a2:d7:76:1d:83:30 ID:1,a2:d7:76:1d:83:30 Lease:0x65038369}
	I0914 15:10:38.484580    3284 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:fa:ab:b1:c2:6f:25 ID:1,fa:ab:b1:c2:6f:25 Lease:0x6503833e}
	I0914 15:10:40.487088    3284 main.go:141] libmachine: Attempt 3
	I0914 15:10:40.487109    3284 main.go:141] libmachine: Searching for 4a:f2:f2:dc:10:93 in /var/db/dhcpd_leases ...
	I0914 15:10:40.487154    3284 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0914 15:10:40.487159    3284 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:42:d7:28:b6:10:34 ID:1,42:d7:28:b6:10:34 Lease:0x6504d4f6}
	I0914 15:10:40.487167    3284 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:a2:d7:76:1d:83:30 ID:1,a2:d7:76:1d:83:30 Lease:0x65038369}
	I0914 15:10:40.487172    3284 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:fa:ab:b1:c2:6f:25 ID:1,fa:ab:b1:c2:6f:25 Lease:0x6503833e}
	I0914 15:10:42.489618    3284 main.go:141] libmachine: Attempt 4
	I0914 15:10:42.489633    3284 main.go:141] libmachine: Searching for 4a:f2:f2:dc:10:93 in /var/db/dhcpd_leases ...
	I0914 15:10:42.489705    3284 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0914 15:10:42.489716    3284 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:42:d7:28:b6:10:34 ID:1,42:d7:28:b6:10:34 Lease:0x6504d4f6}
	I0914 15:10:42.489721    3284 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:a2:d7:76:1d:83:30 ID:1,a2:d7:76:1d:83:30 Lease:0x65038369}
	I0914 15:10:42.489724    3284 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:fa:ab:b1:c2:6f:25 ID:1,fa:ab:b1:c2:6f:25 Lease:0x6503833e}
	I0914 15:10:44.492152    3284 main.go:141] libmachine: Attempt 5
	I0914 15:10:44.492162    3284 main.go:141] libmachine: Searching for 4a:f2:f2:dc:10:93 in /var/db/dhcpd_leases ...
	I0914 15:10:44.492241    3284 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0914 15:10:44.492249    3284 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:42:d7:28:b6:10:34 ID:1,42:d7:28:b6:10:34 Lease:0x6504d4f6}
	I0914 15:10:44.492253    3284 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:a2:d7:76:1d:83:30 ID:1,a2:d7:76:1d:83:30 Lease:0x65038369}
	I0914 15:10:44.492257    3284 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:fa:ab:b1:c2:6f:25 ID:1,fa:ab:b1:c2:6f:25 Lease:0x6503833e}
	I0914 15:10:46.494635    3284 main.go:141] libmachine: Attempt 6
	I0914 15:10:46.494656    3284 main.go:141] libmachine: Searching for 4a:f2:f2:dc:10:93 in /var/db/dhcpd_leases ...
	I0914 15:10:46.494756    3284 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0914 15:10:46.494774    3284 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:4a:f2:f2:dc:10:93 ID:1,4a:f2:f2:dc:10:93 Lease:0x6504d665}
	I0914 15:10:46.494779    3284 main.go:141] libmachine: Found match: 4a:f2:f2:dc:10:93
	I0914 15:10:46.494790    3284 main.go:141] libmachine: IP: 192.168.105.5
	I0914 15:10:46.494798    3284 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.5)...
	I0914 15:10:47.504913    3284 machine.go:88] provisioning docker machine ...
	I0914 15:10:47.504927    3284 buildroot.go:166] provisioning hostname "image-717000"
	I0914 15:10:47.504972    3284 main.go:141] libmachine: Using SSH client type: native
	I0914 15:10:47.505281    3284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102540760] 0x102542ed0 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0914 15:10:47.505284    3284 main.go:141] libmachine: About to run SSH command:
	sudo hostname image-717000 && echo "image-717000" | sudo tee /etc/hostname
	I0914 15:10:47.572250    3284 main.go:141] libmachine: SSH cmd err, output: <nil>: image-717000
	
	I0914 15:10:47.572316    3284 main.go:141] libmachine: Using SSH client type: native
	I0914 15:10:47.572562    3284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102540760] 0x102542ed0 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0914 15:10:47.572568    3284 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\simage-717000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 image-717000/g' /etc/hosts;
				else 
					echo '127.0.1.1 image-717000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 15:10:47.637865    3284 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 15:10:47.637873    3284 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17243-1006/.minikube CaCertPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17243-1006/.minikube}
	I0914 15:10:47.637880    3284 buildroot.go:174] setting up certificates
	I0914 15:10:47.637884    3284 provision.go:83] configureAuth start
	I0914 15:10:47.637887    3284 provision.go:138] copyHostCerts
	I0914 15:10:47.637955    3284 exec_runner.go:144] found /Users/jenkins/minikube-integration/17243-1006/.minikube/ca.pem, removing ...
	I0914 15:10:47.637965    3284 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17243-1006/.minikube/ca.pem
	I0914 15:10:47.638072    3284 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17243-1006/.minikube/ca.pem (1082 bytes)
	I0914 15:10:47.638271    3284 exec_runner.go:144] found /Users/jenkins/minikube-integration/17243-1006/.minikube/cert.pem, removing ...
	I0914 15:10:47.638273    3284 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17243-1006/.minikube/cert.pem
	I0914 15:10:47.638314    3284 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17243-1006/.minikube/cert.pem (1123 bytes)
	I0914 15:10:47.638409    3284 exec_runner.go:144] found /Users/jenkins/minikube-integration/17243-1006/.minikube/key.pem, removing ...
	I0914 15:10:47.638414    3284 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17243-1006/.minikube/key.pem
	I0914 15:10:47.638451    3284 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17243-1006/.minikube/key.pem (1675 bytes)
	I0914 15:10:47.638529    3284 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca-key.pem org=jenkins.image-717000 san=[192.168.105.5 192.168.105.5 localhost 127.0.0.1 minikube image-717000]
	I0914 15:10:47.756741    3284 provision.go:172] copyRemoteCerts
	I0914 15:10:47.756766    3284 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 15:10:47.756771    3284 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/image-717000/id_rsa Username:docker}
	I0914 15:10:47.789706    3284 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0914 15:10:47.796752    3284 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0914 15:10:47.803696    3284 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0914 15:10:47.810639    3284 provision.go:86] duration metric: configureAuth took 172.730792ms
	I0914 15:10:47.810645    3284 buildroot.go:189] setting minikube options for container-runtime
	I0914 15:10:47.810753    3284 config.go:182] Loaded profile config "image-717000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 15:10:47.810783    3284 main.go:141] libmachine: Using SSH client type: native
	I0914 15:10:47.810997    3284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102540760] 0x102542ed0 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0914 15:10:47.811000    3284 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0914 15:10:47.875409    3284 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0914 15:10:47.875414    3284 buildroot.go:70] root file system type: tmpfs
	I0914 15:10:47.875463    3284 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0914 15:10:47.875518    3284 main.go:141] libmachine: Using SSH client type: native
	I0914 15:10:47.875774    3284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102540760] 0x102542ed0 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0914 15:10:47.875807    3284 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0914 15:10:47.947426    3284 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0914 15:10:47.947473    3284 main.go:141] libmachine: Using SSH client type: native
	I0914 15:10:47.947744    3284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102540760] 0x102542ed0 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0914 15:10:47.947753    3284 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0914 15:10:48.296449    3284 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0914 15:10:48.296458    3284 machine.go:91] provisioned docker machine in 791.448708ms
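(Editor's sketch, not minikube's actual code.) The provisioning step above installs docker.service with a compare-then-swap one-liner: write the candidate unit to docker.service.new, diff it against the current unit, and only on a difference move it into place and restart the daemon. A rough Go sketch of that same pattern is below; the paths and the runCmd helper are illustrative assumptions.

// Illustrative only: compare-then-swap update of a systemd unit, mirroring the
// `diff -u old new || { mv new old; systemctl daemon-reload && ... restart; }`
// shell pattern logged above. Not minikube's real provisioner API.
package main

import (
	"bytes"
	"log"
	"os"
	"os/exec"
)

func runCmd(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	const unit = "/lib/systemd/system/docker.service"
	newUnit := unit + ".new"

	oldBytes, _ := os.ReadFile(unit) // a missing unit reads as nil and never matches
	newBytes, err := os.ReadFile(newUnit)
	if err != nil {
		log.Fatal(err)
	}
	if bytes.Equal(oldBytes, newBytes) {
		return // unit unchanged, nothing to reload or restart
	}
	// Swap in the new unit and restart docker, as in the logged command.
	if err := os.Rename(newUnit, unit); err != nil {
		log.Fatal(err)
	}
	for _, args := range [][]string{{"daemon-reload"}, {"enable", "docker"}, {"restart", "docker"}} {
		if err := runCmd("systemctl", args...); err != nil {
			log.Fatal(err)
		}
	}
}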
	I0914 15:10:48.296462    3284 client.go:171] LocalClient.Create took 14.09159075s
	I0914 15:10:48.296477    3284 start.go:167] duration metric: libmachine.API.Create for "image-717000" took 14.091636042s
	I0914 15:10:48.296480    3284 start.go:300] post-start starting for "image-717000" (driver="qemu2")
	I0914 15:10:48.296484    3284 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 15:10:48.296566    3284 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 15:10:48.296577    3284 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/image-717000/id_rsa Username:docker}
	I0914 15:10:48.330445    3284 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 15:10:48.331936    3284 info.go:137] Remote host: Buildroot 2021.02.12
	I0914 15:10:48.331944    3284 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17243-1006/.minikube/addons for local assets ...
	I0914 15:10:48.332010    3284 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17243-1006/.minikube/files for local assets ...
	I0914 15:10:48.332114    3284 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17243-1006/.minikube/files/etc/ssl/certs/14252.pem -> 14252.pem in /etc/ssl/certs
	I0914 15:10:48.332230    3284 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 15:10:48.334775    3284 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/files/etc/ssl/certs/14252.pem --> /etc/ssl/certs/14252.pem (1708 bytes)
	I0914 15:10:48.341769    3284 start.go:303] post-start completed in 45.280041ms
	I0914 15:10:48.342155    3284 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/image-717000/config.json ...
	I0914 15:10:48.342302    3284 start.go:128] duration metric: createHost completed in 14.167466292s
	I0914 15:10:48.342328    3284 main.go:141] libmachine: Using SSH client type: native
	I0914 15:10:48.342539    3284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102540760] 0x102542ed0 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0914 15:10:48.342542    3284 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0914 15:10:48.406527    3284 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694729448.524907918
	
	I0914 15:10:48.406530    3284 fix.go:206] guest clock: 1694729448.524907918
	I0914 15:10:48.406534    3284 fix.go:219] Guest: 2023-09-14 15:10:48.524907918 -0700 PDT Remote: 2023-09-14 15:10:48.342306 -0700 PDT m=+14.265587626 (delta=182.601918ms)
	I0914 15:10:48.406542    3284 fix.go:190] guest clock delta is within tolerance: 182.601918ms
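(Editor's sketch.) The guest-clock check above runs `date +%s.%N` in the VM, compares it with the host's wall clock at the same moment, and accepts the machine if the absolute delta is within a tolerance. A minimal Go sketch of that arithmetic follows, using the timestamps from the log; the 2s tolerance is an assumption for the example, not minikube's configured threshold.

// Illustrative only: guest/host clock-skew check using the values logged above.
package main

import (
	"fmt"
	"time"
)

func main() {
	// Guest clock as reported by `date +%s.%N` inside the VM (from the log).
	guest := time.Unix(1694729448, 524907918)
	// Host wall clock when the command returned (also from the log).
	host := time.Date(2023, time.September, 14, 15, 10, 48, 342306000,
		time.FixedZone("PDT", -7*60*60))

	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed threshold for this sketch
	fmt.Printf("guest/host clock delta: %v (within %v: %v)\n",
		delta, tolerance, delta <= tolerance) // prints 182.601918ms, as in the log
}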
	I0914 15:10:48.406544    3284 start.go:83] releasing machines lock for "image-717000", held for 14.23174075s
	I0914 15:10:48.406865    3284 ssh_runner.go:195] Run: cat /version.json
	I0914 15:10:48.406875    3284 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/image-717000/id_rsa Username:docker}
	I0914 15:10:48.406881    3284 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 15:10:48.406898    3284 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/image-717000/id_rsa Username:docker}
	I0914 15:10:48.481880    3284 ssh_runner.go:195] Run: systemctl --version
	I0914 15:10:48.484100    3284 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 15:10:48.486109    3284 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 15:10:48.486134    3284 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 15:10:48.491349    3284 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 15:10:48.491354    3284 start.go:469] detecting cgroup driver to use...
	I0914 15:10:48.491426    3284 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 15:10:48.497407    3284 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0914 15:10:48.500306    3284 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0914 15:10:48.503122    3284 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0914 15:10:48.503150    3284 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0914 15:10:48.506460    3284 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0914 15:10:48.509821    3284 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0914 15:10:48.513134    3284 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0914 15:10:48.516086    3284 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 15:10:48.518885    3284 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0914 15:10:48.522263    3284 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 15:10:48.525197    3284 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 15:10:48.527823    3284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 15:10:48.591127    3284 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0914 15:10:48.599994    3284 start.go:469] detecting cgroup driver to use...
	I0914 15:10:48.600061    3284 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0914 15:10:48.606202    3284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 15:10:48.610978    3284 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 15:10:48.616913    3284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 15:10:48.620992    3284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0914 15:10:48.625603    3284 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0914 15:10:48.689662    3284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0914 15:10:48.695801    3284 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 15:10:48.701988    3284 ssh_runner.go:195] Run: which cri-dockerd
	I0914 15:10:48.703287    3284 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0914 15:10:48.706510    3284 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0914 15:10:48.711513    3284 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0914 15:10:48.774199    3284 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0914 15:10:48.839744    3284 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0914 15:10:48.839754    3284 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0914 15:10:48.844911    3284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 15:10:48.906575    3284 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0914 15:10:50.074841    3284 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.168132625s)
	I0914 15:10:50.074909    3284 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0914 15:10:50.141976    3284 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0914 15:10:50.206193    3284 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0914 15:10:50.262278    3284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 15:10:50.326720    3284 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0914 15:10:50.334262    3284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 15:10:50.397342    3284 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0914 15:10:50.421502    3284 start.go:516] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0914 15:10:50.421593    3284 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0914 15:10:50.424420    3284 start.go:537] Will wait 60s for crictl version
	I0914 15:10:50.424464    3284 ssh_runner.go:195] Run: which crictl
	I0914 15:10:50.425914    3284 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 15:10:50.441261    3284 start.go:553] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1alpha2
	I0914 15:10:50.441325    3284 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0914 15:10:50.450910    3284 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0914 15:10:50.462623    3284 out.go:204] * Preparing Kubernetes v1.28.1 on Docker 24.0.6 ...
	I0914 15:10:50.462760    3284 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0914 15:10:50.464190    3284 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 15:10:50.467624    3284 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0914 15:10:50.467666    3284 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0914 15:10:50.472745    3284 docker.go:636] Got preloaded images: 
	I0914 15:10:50.472749    3284 docker.go:642] registry.k8s.io/kube-apiserver:v1.28.1 wasn't preloaded
	I0914 15:10:50.472791    3284 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0914 15:10:50.475601    3284 ssh_runner.go:195] Run: which lz4
	I0914 15:10:50.476948    3284 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0914 15:10:50.478184    3284 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 15:10:50.478194    3284 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (356902558 bytes)
	I0914 15:10:51.790565    3284 docker.go:600] Took 1.313545 seconds to copy over tarball
	I0914 15:10:51.790616    3284 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 15:10:52.819529    3284 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.028812333s)
	I0914 15:10:52.819538    3284 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 15:10:52.835433    3284 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0914 15:10:52.838928    3284 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0914 15:10:52.843868    3284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 15:10:52.898655    3284 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0914 15:10:54.370169    3284 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.471392583s)
	I0914 15:10:54.370245    3284 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0914 15:10:54.376241    3284 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.1
	registry.k8s.io/kube-controller-manager:v1.28.1
	registry.k8s.io/kube-scheduler:v1.28.1
	registry.k8s.io/kube-proxy:v1.28.1
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0914 15:10:54.376249    3284 cache_images.go:84] Images are preloaded, skipping loading
	I0914 15:10:54.376300    3284 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0914 15:10:54.384138    3284 cni.go:84] Creating CNI manager for ""
	I0914 15:10:54.384145    3284 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 15:10:54.384154    3284 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0914 15:10:54.384163    3284 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.5 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:image-717000 NodeName:image-717000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 15:10:54.384233    3284 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "image-717000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 15:10:54.384266    3284 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=image-717000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:image-717000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0914 15:10:54.384325    3284 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0914 15:10:54.387365    3284 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 15:10:54.387396    3284 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 15:10:54.390592    3284 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I0914 15:10:54.395642    3284 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 15:10:54.400616    3284 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2098 bytes)
	I0914 15:10:54.405450    3284 ssh_runner.go:195] Run: grep 192.168.105.5	control-plane.minikube.internal$ /etc/hosts
	I0914 15:10:54.406760    3284 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.5	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 15:10:54.410898    3284 certs.go:56] Setting up /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/image-717000 for IP: 192.168.105.5
	I0914 15:10:54.410905    3284 certs.go:190] acquiring lock for shared ca certs: {Name:mkd19d6e2143685b57ba1e0d43c4081bbdb26a79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 15:10:54.411034    3284 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17243-1006/.minikube/ca.key
	I0914 15:10:54.411071    3284 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17243-1006/.minikube/proxy-client-ca.key
	I0914 15:10:54.411095    3284 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/image-717000/client.key
	I0914 15:10:54.411103    3284 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/image-717000/client.crt with IP's: []
	I0914 15:10:54.482641    3284 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/image-717000/client.crt ...
	I0914 15:10:54.482644    3284 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/image-717000/client.crt: {Name:mkaa8ca96782e60674dc0f1672bb7b2f0c7055f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 15:10:54.482844    3284 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/image-717000/client.key ...
	I0914 15:10:54.482846    3284 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/image-717000/client.key: {Name:mk7af937cad1d6c4c4d12f5872ffc9e5c06852e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 15:10:54.482958    3284 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/image-717000/apiserver.key.e69b33ca
	I0914 15:10:54.482963    3284 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/image-717000/apiserver.crt.e69b33ca with IP's: [192.168.105.5 10.96.0.1 127.0.0.1 10.0.0.1]
	I0914 15:10:54.547347    3284 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/image-717000/apiserver.crt.e69b33ca ...
	I0914 15:10:54.547350    3284 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/image-717000/apiserver.crt.e69b33ca: {Name:mk01222de6907aceb703f8e790a0621ca23eeccb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 15:10:54.547486    3284 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/image-717000/apiserver.key.e69b33ca ...
	I0914 15:10:54.547488    3284 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/image-717000/apiserver.key.e69b33ca: {Name:mk3f4d5a758ed9cb8fde474cdc62cc9b784851b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 15:10:54.547591    3284 certs.go:337] copying /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/image-717000/apiserver.crt.e69b33ca -> /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/image-717000/apiserver.crt
	I0914 15:10:54.547772    3284 certs.go:341] copying /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/image-717000/apiserver.key.e69b33ca -> /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/image-717000/apiserver.key
	I0914 15:10:54.547900    3284 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/image-717000/proxy-client.key
	I0914 15:10:54.547906    3284 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/image-717000/proxy-client.crt with IP's: []
	I0914 15:10:54.651448    3284 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/image-717000/proxy-client.crt ...
	I0914 15:10:54.651451    3284 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/image-717000/proxy-client.crt: {Name:mk3951f3d6d54a7aea94f179dbc342de761ac334 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 15:10:54.651620    3284 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/image-717000/proxy-client.key ...
	I0914 15:10:54.651621    3284 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/image-717000/proxy-client.key: {Name:mk1eb6241df802ab5b8fbd18f53479397c4ae357 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 15:10:54.651873    3284 certs.go:437] found cert: /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/1425.pem (1338 bytes)
	W0914 15:10:54.651898    3284 certs.go:433] ignoring /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/1425_empty.pem, impossibly tiny 0 bytes
	I0914 15:10:54.651903    3284 certs.go:437] found cert: /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 15:10:54.651921    3284 certs.go:437] found cert: /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem (1082 bytes)
	I0914 15:10:54.651938    3284 certs.go:437] found cert: /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/cert.pem (1123 bytes)
	I0914 15:10:54.651960    3284 certs.go:437] found cert: /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/key.pem (1675 bytes)
	I0914 15:10:54.652001    3284 certs.go:437] found cert: /Users/jenkins/minikube-integration/17243-1006/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17243-1006/.minikube/files/etc/ssl/certs/14252.pem (1708 bytes)
	I0914 15:10:54.652348    3284 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/image-717000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0914 15:10:54.659898    3284 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/image-717000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0914 15:10:54.666984    3284 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/image-717000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 15:10:54.674433    3284 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/image-717000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 15:10:54.682367    3284 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 15:10:54.689512    3284 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 15:10:54.696280    3284 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 15:10:54.703085    3284 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0914 15:10:54.710469    3284 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 15:10:54.717638    3284 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/1425.pem --> /usr/share/ca-certificates/1425.pem (1338 bytes)
	I0914 15:10:54.724357    3284 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/files/etc/ssl/certs/14252.pem --> /usr/share/ca-certificates/14252.pem (1708 bytes)
	I0914 15:10:54.731032    3284 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 15:10:54.736145    3284 ssh_runner.go:195] Run: openssl version
	I0914 15:10:54.738193    3284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1425.pem && ln -fs /usr/share/ca-certificates/1425.pem /etc/ssl/certs/1425.pem"
	I0914 15:10:54.741560    3284 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1425.pem
	I0914 15:10:54.742962    3284 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 14 22:04 /usr/share/ca-certificates/1425.pem
	I0914 15:10:54.742981    3284 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1425.pem
	I0914 15:10:54.744924    3284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1425.pem /etc/ssl/certs/51391683.0"
	I0914 15:10:54.747903    3284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14252.pem && ln -fs /usr/share/ca-certificates/14252.pem /etc/ssl/certs/14252.pem"
	I0914 15:10:54.751109    3284 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14252.pem
	I0914 15:10:54.752719    3284 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 14 22:04 /usr/share/ca-certificates/14252.pem
	I0914 15:10:54.752739    3284 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14252.pem
	I0914 15:10:54.754499    3284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14252.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 15:10:54.757811    3284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 15:10:54.761055    3284 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 15:10:54.762578    3284 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 14 21:36 /usr/share/ca-certificates/minikubeCA.pem
	I0914 15:10:54.762598    3284 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 15:10:54.764510    3284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 15:10:54.767565    3284 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0914 15:10:54.769081    3284 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0914 15:10:54.769109    3284 kubeadm.go:404] StartCluster: {Name:image-717000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:image-717000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 15:10:54.769175    3284 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0914 15:10:54.774754    3284 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 15:10:54.777909    3284 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 15:10:54.780490    3284 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 15:10:54.783344    3284 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 15:10:54.783356    3284 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 15:10:54.804686    3284 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0914 15:10:54.804722    3284 kubeadm.go:322] [preflight] Running pre-flight checks
	I0914 15:10:54.858554    3284 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 15:10:54.858604    3284 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 15:10:54.858651    3284 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0914 15:10:54.916724    3284 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 15:10:54.925885    3284 out.go:204]   - Generating certificates and keys ...
	I0914 15:10:54.925928    3284 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0914 15:10:54.925986    3284 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0914 15:10:54.943014    3284 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0914 15:10:55.054567    3284 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0914 15:10:55.149334    3284 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0914 15:10:55.242614    3284 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0914 15:10:55.386069    3284 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0914 15:10:55.386138    3284 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [image-717000 localhost] and IPs [192.168.105.5 127.0.0.1 ::1]
	I0914 15:10:55.434789    3284 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0914 15:10:55.434852    3284 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [image-717000 localhost] and IPs [192.168.105.5 127.0.0.1 ::1]
	I0914 15:10:55.534363    3284 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0914 15:10:55.613646    3284 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0914 15:10:55.640952    3284 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0914 15:10:55.640986    3284 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 15:10:55.788110    3284 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 15:10:55.891696    3284 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 15:10:55.970227    3284 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 15:10:56.027794    3284 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 15:10:56.028046    3284 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 15:10:56.029058    3284 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 15:10:56.038329    3284 out.go:204]   - Booting up control plane ...
	I0914 15:10:56.038394    3284 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 15:10:56.038457    3284 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 15:10:56.038485    3284 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 15:10:56.038535    3284 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 15:10:56.038575    3284 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 15:10:56.038598    3284 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0914 15:10:56.111728    3284 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 15:10:59.613815    3284 kubeadm.go:322] [apiclient] All control plane components are healthy after 3.502043 seconds
	I0914 15:10:59.613870    3284 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 15:10:59.619356    3284 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 15:11:00.128594    3284 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0914 15:11:00.128700    3284 kubeadm.go:322] [mark-control-plane] Marking the node image-717000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0914 15:11:00.633588    3284 kubeadm.go:322] [bootstrap-token] Using token: 7hid00.os16kto2w6467at7
	I0914 15:11:00.639920    3284 out.go:204]   - Configuring RBAC rules ...
	I0914 15:11:00.639967    3284 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 15:11:00.640831    3284 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0914 15:11:00.647819    3284 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 15:11:00.649111    3284 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0914 15:11:00.650627    3284 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 15:11:00.651839    3284 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 15:11:00.655919    3284 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0914 15:11:00.823542    3284 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0914 15:11:01.045682    3284 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0914 15:11:01.046076    3284 kubeadm.go:322] 
	I0914 15:11:01.046108    3284 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0914 15:11:01.046110    3284 kubeadm.go:322] 
	I0914 15:11:01.046146    3284 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0914 15:11:01.046151    3284 kubeadm.go:322] 
	I0914 15:11:01.046171    3284 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0914 15:11:01.046201    3284 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 15:11:01.046223    3284 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 15:11:01.046225    3284 kubeadm.go:322] 
	I0914 15:11:01.046255    3284 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0914 15:11:01.046256    3284 kubeadm.go:322] 
	I0914 15:11:01.046289    3284 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0914 15:11:01.046292    3284 kubeadm.go:322] 
	I0914 15:11:01.046315    3284 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0914 15:11:01.046349    3284 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 15:11:01.046384    3284 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 15:11:01.046385    3284 kubeadm.go:322] 
	I0914 15:11:01.046426    3284 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0914 15:11:01.046468    3284 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0914 15:11:01.046470    3284 kubeadm.go:322] 
	I0914 15:11:01.046510    3284 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 7hid00.os16kto2w6467at7 \
	I0914 15:11:01.046565    3284 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:64a56fc8b6798e154cd69d815313de8f6cddb906dc6093ac20ba940ac7c1d871 \
	I0914 15:11:01.046575    3284 kubeadm.go:322] 	--control-plane 
	I0914 15:11:01.046576    3284 kubeadm.go:322] 
	I0914 15:11:01.046625    3284 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0914 15:11:01.046627    3284 kubeadm.go:322] 
	I0914 15:11:01.046673    3284 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 7hid00.os16kto2w6467at7 \
	I0914 15:11:01.046723    3284 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:64a56fc8b6798e154cd69d815313de8f6cddb906dc6093ac20ba940ac7c1d871 
	I0914 15:11:01.046780    3284 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 15:11:01.046788    3284 cni.go:84] Creating CNI manager for ""
	I0914 15:11:01.046795    3284 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 15:11:01.054423    3284 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 15:11:01.058428    3284 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 15:11:01.061602    3284 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0914 15:11:01.066177    3284 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 15:11:01.066215    3284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 15:11:01.066234    3284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=d7492f2ae2d9b6e62b385ffcd97ebad62c645e82 minikube.k8s.io/name=image-717000 minikube.k8s.io/updated_at=2023_09_14T15_11_01_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 15:11:01.069892    3284 ops.go:34] apiserver oom_adj: -16
	I0914 15:11:01.121625    3284 kubeadm.go:1081] duration metric: took 55.440041ms to wait for elevateKubeSystemPrivileges.
	I0914 15:11:01.121632    3284 kubeadm.go:406] StartCluster complete in 6.352209167s
	I0914 15:11:01.121645    3284 settings.go:142] acquiring lock: {Name:mkcccc97e247e7e1b2e556ccc64336c05a92af53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 15:11:01.121727    3284 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17243-1006/kubeconfig
	I0914 15:11:01.122025    3284 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/kubeconfig: {Name:mkeec13fc5a79792669e9cedabfbe21efeb27d3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 15:11:01.122185    3284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0914 15:11:01.122288    3284 config.go:182] Loaded profile config "image-717000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 15:11:01.122240    3284 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0914 15:11:01.122316    3284 addons.go:69] Setting storage-provisioner=true in profile "image-717000"
	I0914 15:11:01.122322    3284 addons.go:231] Setting addon storage-provisioner=true in "image-717000"
	I0914 15:11:01.122329    3284 addons.go:69] Setting default-storageclass=true in profile "image-717000"
	I0914 15:11:01.122335    3284 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "image-717000"
	I0914 15:11:01.122339    3284 host.go:66] Checking if "image-717000" exists ...
	I0914 15:11:01.128373    3284 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 15:11:01.132484    3284 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 15:11:01.132488    3284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 15:11:01.132497    3284 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/image-717000/id_rsa Username:docker}
	I0914 15:11:01.136248    3284 addons.go:231] Setting addon default-storageclass=true in "image-717000"
	I0914 15:11:01.136263    3284 host.go:66] Checking if "image-717000" exists ...
	I0914 15:11:01.136903    3284 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 15:11:01.136908    3284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 15:11:01.136914    3284 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/image-717000/id_rsa Username:docker}
	I0914 15:11:01.140034    3284 kapi.go:248] "coredns" deployment in "kube-system" namespace and "image-717000" context rescaled to 1 replicas
	I0914 15:11:01.140045    3284 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 15:11:01.144386    3284 out.go:177] * Verifying Kubernetes components...
	I0914 15:11:01.152406    3284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 15:11:01.178621    3284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0914 15:11:01.178975    3284 api_server.go:52] waiting for apiserver process to appear ...
	I0914 15:11:01.179006    3284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 15:11:01.180155    3284 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 15:11:01.184310    3284 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 15:11:01.580719    3284 api_server.go:72] duration metric: took 440.646ms to wait for apiserver process to appear ...
	I0914 15:11:01.580726    3284 api_server.go:88] waiting for apiserver healthz status ...
	I0914 15:11:01.580733    3284 api_server.go:253] Checking apiserver healthz at https://192.168.105.5:8443/healthz ...
	I0914 15:11:01.580805    3284 start.go:917] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I0914 15:11:01.585349    3284 api_server.go:279] https://192.168.105.5:8443/healthz returned 200:
	ok
	I0914 15:11:01.586385    3284 api_server.go:141] control plane version: v1.28.1
	I0914 15:11:01.586390    3284 api_server.go:131] duration metric: took 5.662209ms to wait for apiserver health ...
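(Editor's sketch.) The healthz wait above repeatedly probes https://192.168.105.5:8443/healthz until it answers 200 "ok". A small Go sketch of that polling loop follows; the InsecureSkipVerify transport and the 30s budget are assumptions for the example (the real client authenticates against the cluster CA), not minikube's actual implementation.

// Illustrative only: poll an apiserver /healthz endpoint until it returns 200.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Demo shortcut only; a real client would trust the cluster CA instead.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // back off between probes
	}
	return fmt.Errorf("apiserver at %s not healthy after %v", url, timeout)
}

func main() {
	if err := waitHealthz("https://192.168.105.5:8443/healthz", 30*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("healthz returned 200: ok")
}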
	I0914 15:11:01.586395    3284 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 15:11:01.589936    3284 system_pods.go:59] 4 kube-system pods found
	I0914 15:11:01.589943    3284 system_pods.go:61] "etcd-image-717000" [6836f775-846b-4d42-bfd3-e885ab1433b8] Pending
	I0914 15:11:01.589945    3284 system_pods.go:61] "kube-apiserver-image-717000" [2ea7eddf-fc09-4178-9c3b-7dd6fa66595e] Pending
	I0914 15:11:01.589947    3284 system_pods.go:61] "kube-controller-manager-image-717000" [a42d6b4d-dc72-4baf-8cc6-4200048739d9] Pending
	I0914 15:11:01.589948    3284 system_pods.go:61] "kube-scheduler-image-717000" [d2036e19-007e-4852-a37c-f1822e2c169c] Pending
	I0914 15:11:01.589951    3284 system_pods.go:74] duration metric: took 3.553833ms to wait for pod list to return data ...
	I0914 15:11:01.589954    3284 kubeadm.go:581] duration metric: took 449.88475ms to wait for : map[apiserver:true system_pods:true] ...
	I0914 15:11:01.589960    3284 node_conditions.go:102] verifying NodePressure condition ...
	I0914 15:11:01.591277    3284 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0914 15:11:01.591283    3284 node_conditions.go:123] node cpu capacity is 2
	I0914 15:11:01.591288    3284 node_conditions.go:105] duration metric: took 1.326792ms to run NodePressure ...
	I0914 15:11:01.591292    3284 start.go:228] waiting for startup goroutines ...
	I0914 15:11:01.679967    3284 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0914 15:11:01.687891    3284 addons.go:502] enable addons completed in 565.661083ms: enabled=[default-storageclass storage-provisioner]
	I0914 15:11:01.687906    3284 start.go:233] waiting for cluster config update ...
	I0914 15:11:01.687910    3284 start.go:242] writing updated cluster config ...
	I0914 15:11:01.688309    3284 ssh_runner.go:195] Run: rm -f paused
	I0914 15:11:01.715916    3284 start.go:600] kubectl: 1.27.2, cluster: 1.28.1 (minor skew: 1)
	I0914 15:11:01.719955    3284 out.go:177] * Done! kubectl is now configured to use "image-717000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Thu 2023-09-14 22:10:45 UTC, ends at Thu 2023-09-14 22:11:03 UTC. --
	Sep 14 22:10:57 image-717000 dockerd[1100]: time="2023-09-14T22:10:57.293318089Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 14 22:10:57 image-717000 dockerd[1100]: time="2023-09-14T22:10:57.293385922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 22:10:57 image-717000 dockerd[1100]: time="2023-09-14T22:10:57.293397131Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 14 22:10:57 image-717000 dockerd[1100]: time="2023-09-14T22:10:57.293405756Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 22:10:57 image-717000 cri-dockerd[993]: time="2023-09-14T22:10:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c88836e92ed7638b45a0f2550f1b7201d5f97bfd0f575c77e8ac83079f1e17f8/resolv.conf as [nameserver 192.168.105.1]"
	Sep 14 22:10:57 image-717000 cri-dockerd[993]: time="2023-09-14T22:10:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c44ce0ed841880a8e8569278dfb9c84fcf9fa9749339e77fb2aaf2669147cb74/resolv.conf as [nameserver 192.168.105.1]"
	Sep 14 22:10:57 image-717000 dockerd[1100]: time="2023-09-14T22:10:57.347761172Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 14 22:10:57 image-717000 dockerd[1100]: time="2023-09-14T22:10:57.348705797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 22:10:57 image-717000 dockerd[1100]: time="2023-09-14T22:10:57.348731464Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 14 22:10:57 image-717000 dockerd[1100]: time="2023-09-14T22:10:57.348795381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 22:10:57 image-717000 dockerd[1100]: time="2023-09-14T22:10:57.351909589Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 14 22:10:57 image-717000 dockerd[1100]: time="2023-09-14T22:10:57.351984131Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 22:10:57 image-717000 dockerd[1100]: time="2023-09-14T22:10:57.352014297Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 14 22:10:57 image-717000 dockerd[1100]: time="2023-09-14T22:10:57.352037131Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 22:11:03 image-717000 dockerd[1094]: time="2023-09-14T22:11:03.341699133Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	Sep 14 22:11:03 image-717000 dockerd[1094]: time="2023-09-14T22:11:03.469125759Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	Sep 14 22:11:03 image-717000 dockerd[1094]: time="2023-09-14T22:11:03.490585467Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	Sep 14 22:11:03 image-717000 dockerd[1100]: time="2023-09-14T22:11:03.520899467Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 14 22:11:03 image-717000 dockerd[1100]: time="2023-09-14T22:11:03.520929884Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 22:11:03 image-717000 dockerd[1100]: time="2023-09-14T22:11:03.520941050Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 14 22:11:03 image-717000 dockerd[1100]: time="2023-09-14T22:11:03.520962717Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 22:11:03 image-717000 dockerd[1094]: time="2023-09-14T22:11:03.659380550Z" level=info msg="ignoring event" container=ee486c20e98f0be0e28334291550cbdc56b11bb5419c83064255781723a8c557 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 14 22:11:03 image-717000 dockerd[1100]: time="2023-09-14T22:11:03.659587842Z" level=info msg="shim disconnected" id=ee486c20e98f0be0e28334291550cbdc56b11bb5419c83064255781723a8c557 namespace=moby
	Sep 14 22:11:03 image-717000 dockerd[1100]: time="2023-09-14T22:11:03.659672884Z" level=warning msg="cleaning up after shim disconnected" id=ee486c20e98f0be0e28334291550cbdc56b11bb5419c83064255781723a8c557 namespace=moby
	Sep 14 22:11:03 image-717000 dockerd[1100]: time="2023-09-14T22:11:03.659693050Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	f342986cd32a7       9cdd6470f48c8       7 seconds ago       Running             etcd                      0                   c44ce0ed84188
	ae521757e2a9e       8b6e1980b7584       7 seconds ago       Running             kube-controller-manager   0                   c88836e92ed76
	9d6406a5b0499       b4a5a57e99492       7 seconds ago       Running             kube-scheduler            0                   5b2ffdbb9fd97
	a816c278e378d       b29fb62480892       7 seconds ago       Running             kube-apiserver            0                   17ecf98911a74
	
	* 
	* ==> describe nodes <==
	* Name:               image-717000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=image-717000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7492f2ae2d9b6e62b385ffcd97ebad62c645e82
	                    minikube.k8s.io/name=image-717000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_14T15_11_01_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Sep 2023 22:10:58 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  image-717000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Sep 2023 22:11:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Sep 2023 22:11:01 +0000   Thu, 14 Sep 2023 22:10:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Sep 2023 22:11:01 +0000   Thu, 14 Sep 2023 22:10:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Sep 2023 22:11:01 +0000   Thu, 14 Sep 2023 22:10:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Thu, 14 Sep 2023 22:11:01 +0000   Thu, 14 Sep 2023 22:10:58 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
	Addresses:
	  InternalIP:  192.168.105.5
	  Hostname:    image-717000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 96891539b5074aaeb17efc7008c772d9
	  System UUID:                96891539b5074aaeb17efc7008c772d9
	  Boot ID:                    1d6451c4-198b-46f6-ab90-30d971165d95
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-image-717000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5s
	  kube-system                 kube-apiserver-image-717000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-controller-manager-image-717000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-image-717000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             100Mi (2%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From     Message
	  ----    ------                   ----  ----     -------
	  Normal  Starting                 4s    kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  3s    kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3s    kubelet  Node image-717000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3s    kubelet  Node image-717000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3s    kubelet  Node image-717000 status is now: NodeHasSufficientPID
	
	* 
	* ==> dmesg <==
	* [Sep14 22:10] ACPI: SRAT not present
	[  +0.000000] KASLR disabled due to lack of seed
	[  +0.650670] EINJ: EINJ table not found.
	[  +0.526529] systemd-fstab-generator[117]: Ignoring "noauto" for root device
	[  +0.043610] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000799] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +3.118735] systemd-fstab-generator[478]: Ignoring "noauto" for root device
	[  +0.067983] systemd-fstab-generator[489]: Ignoring "noauto" for root device
	[  +0.446125] systemd-fstab-generator[667]: Ignoring "noauto" for root device
	[  +0.182553] systemd-fstab-generator[702]: Ignoring "noauto" for root device
	[  +0.066016] systemd-fstab-generator[713]: Ignoring "noauto" for root device
	[  +0.065117] systemd-fstab-generator[726]: Ignoring "noauto" for root device
	[  +1.152231] kauditd_printk_skb: 17 callbacks suppressed
	[  +0.080790] systemd-fstab-generator[912]: Ignoring "noauto" for root device
	[  +0.069214] systemd-fstab-generator[924]: Ignoring "noauto" for root device
	[  +0.055927] systemd-fstab-generator[935]: Ignoring "noauto" for root device
	[  +0.064723] systemd-fstab-generator[946]: Ignoring "noauto" for root device
	[  +0.063167] systemd-fstab-generator[971]: Ignoring "noauto" for root device
	[  +2.508286] systemd-fstab-generator[1087]: Ignoring "noauto" for root device
	[  +3.208353] systemd-fstab-generator[1417]: Ignoring "noauto" for root device
	[  +0.355058] kauditd_printk_skb: 68 callbacks suppressed
	[Sep14 22:11] systemd-fstab-generator[2269]: Ignoring "noauto" for root device
	[  +2.700238] kauditd_printk_skb: 2 callbacks suppressed
	
	* 
	* ==> etcd [f342986cd32a] <==
	* {"level":"info","ts":"2023-09-14T22:10:57.567507Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 switched to configuration voters=(6403572207504089856)"}
	{"level":"info","ts":"2023-09-14T22:10:57.567567Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"cd5c0afff2184bea","local-member-id":"58de0efec1d86300","added-peer-id":"58de0efec1d86300","added-peer-peer-urls":["https://192.168.105.5:2380"]}
	{"level":"info","ts":"2023-09-14T22:10:57.567699Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-09-14T22:10:57.56776Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.105.5:2380"}
	{"level":"info","ts":"2023-09-14T22:10:57.567779Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.5:2380"}
	{"level":"info","ts":"2023-09-14T22:10:57.567977Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"58de0efec1d86300","initial-advertise-peer-urls":["https://192.168.105.5:2380"],"listen-peer-urls":["https://192.168.105.5:2380"],"advertise-client-urls":["https://192.168.105.5:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.5:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-09-14T22:10:57.570364Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-09-14T22:10:57.649299Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 is starting a new election at term 1"}
	{"level":"info","ts":"2023-09-14T22:10:57.649382Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-09-14T22:10:57.649406Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 received MsgPreVoteResp from 58de0efec1d86300 at term 1"}
	{"level":"info","ts":"2023-09-14T22:10:57.649443Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became candidate at term 2"}
	{"level":"info","ts":"2023-09-14T22:10:57.649459Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 received MsgVoteResp from 58de0efec1d86300 at term 2"}
	{"level":"info","ts":"2023-09-14T22:10:57.649478Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became leader at term 2"}
	{"level":"info","ts":"2023-09-14T22:10:57.649512Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 58de0efec1d86300 elected leader 58de0efec1d86300 at term 2"}
	{"level":"info","ts":"2023-09-14T22:10:57.650615Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-14T22:10:57.650804Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-14T22:10:57.651277Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.5:2379"}
	{"level":"info","ts":"2023-09-14T22:10:57.654378Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-14T22:10:57.654402Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-14T22:10:57.650742Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"58de0efec1d86300","local-member-attributes":"{Name:image-717000 ClientURLs:[https://192.168.105.5:2379]}","request-path":"/0/members/58de0efec1d86300/attributes","cluster-id":"cd5c0afff2184bea","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-14T22:10:57.650749Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-14T22:10:57.655006Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-14T22:10:57.663232Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cd5c0afff2184bea","local-member-id":"58de0efec1d86300","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-14T22:10:57.663278Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-14T22:10:57.663302Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	* 
	* ==> kernel <==
	*  22:11:04 up 0 min,  0 users,  load average: 0.41, 0.10, 0.03
	Linux image-717000 5.10.57 #1 SMP PREEMPT Wed Sep 13 19:05:16 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [a816c278e378] <==
	* I0914 22:10:58.561341       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0914 22:10:58.561533       1 controller.go:624] quota admission added evaluator for: namespaces
	I0914 22:10:58.562740       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0914 22:10:58.562749       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0914 22:10:58.567084       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0914 22:10:58.567499       1 aggregator.go:166] initial CRD sync complete...
	I0914 22:10:58.567513       1 autoregister_controller.go:141] Starting autoregister controller
	I0914 22:10:58.567521       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0914 22:10:58.567528       1 cache.go:39] Caches are synced for autoregister controller
	I0914 22:10:58.569173       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0914 22:10:58.578401       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0914 22:10:58.579992       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0914 22:10:59.468799       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0914 22:10:59.470185       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0914 22:10:59.470191       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0914 22:10:59.613272       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0914 22:10:59.623599       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0914 22:10:59.685569       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0914 22:10:59.688274       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.105.5]
	I0914 22:10:59.688768       1 controller.go:624] quota admission added evaluator for: endpoints
	I0914 22:10:59.690061       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0914 22:11:00.518262       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0914 22:11:00.935390       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0914 22:11:00.940657       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0914 22:11:00.944349       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	
	* 
	* ==> kube-controller-manager [ae521757e2a9] <==
	* I0914 22:11:00.543398       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0914 22:11:00.543842       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kube-apiserver-client"
	I0914 22:11:00.543867       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0914 22:11:00.543888       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0914 22:11:00.543970       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0914 22:11:00.544045       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-legacy-unknown"
	I0914 22:11:00.544389       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0914 22:11:00.544212       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	E0914 22:11:00.546639       1 core.go:213] "Failed to start cloud node lifecycle controller" err="no cloud provider provided"
	I0914 22:11:00.546649       1 controllermanager.go:620] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0914 22:11:00.549986       1 controllermanager.go:642] "Started controller" controller="replicationcontroller-controller"
	I0914 22:11:00.550083       1 replica_set.go:214] "Starting controller" name="replicationcontroller"
	I0914 22:11:00.550113       1 shared_informer.go:311] Waiting for caches to sync for ReplicationController
	I0914 22:11:00.552426       1 controllermanager.go:642] "Started controller" controller="endpoints-controller"
	I0914 22:11:00.552505       1 endpoints_controller.go:174] "Starting endpoint controller"
	I0914 22:11:00.552543       1 shared_informer.go:311] Waiting for caches to sync for endpoint
	I0914 22:11:00.615433       1 shared_informer.go:318] Caches are synced for tokens
	I0914 22:11:00.670675       1 controllermanager.go:642] "Started controller" controller="namespace-controller"
	I0914 22:11:00.670695       1 namespace_controller.go:197] "Starting namespace controller"
	I0914 22:11:00.670698       1 shared_informer.go:311] Waiting for caches to sync for namespace
	I0914 22:11:00.818258       1 controllermanager.go:642] "Started controller" controller="statefulset-controller"
	I0914 22:11:00.818332       1 stateful_set.go:161] "Starting stateful set controller"
	I0914 22:11:00.818360       1 shared_informer.go:311] Waiting for caches to sync for stateful set
	I0914 22:11:00.867347       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0914 22:11:00.867379       1 cleaner.go:83] "Starting CSR cleaner controller"
	
	* 
	* ==> kube-scheduler [9d6406a5b049] <==
	* W0914 22:10:58.534173       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0914 22:10:58.534193       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0914 22:10:58.534224       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0914 22:10:58.534254       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0914 22:10:58.535536       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0914 22:10:58.535576       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0914 22:10:58.535685       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0914 22:10:58.535724       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0914 22:10:58.535765       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0914 22:10:58.535794       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0914 22:10:58.535826       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0914 22:10:58.535843       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0914 22:10:58.535895       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0914 22:10:58.535914       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0914 22:10:58.535964       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0914 22:10:58.535983       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0914 22:10:58.536009       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0914 22:10:58.536038       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0914 22:10:58.536067       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0914 22:10:58.536085       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0914 22:10:58.536126       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0914 22:10:58.536145       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0914 22:10:59.527450       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0914 22:10:59.527473       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0914 22:10:59.729721       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-09-14 22:10:45 UTC, ends at Thu 2023-09-14 22:11:04 UTC. --
	Sep 14 22:11:01 image-717000 kubelet[2288]: I0914 22:11:01.100797    2288 topology_manager.go:215] "Topology Admit Handler" podUID="ccb005c555fcc6d5e192a66db807b287" podNamespace="kube-system" podName="etcd-image-717000"
	Sep 14 22:11:01 image-717000 kubelet[2288]: I0914 22:11:01.100861    2288 topology_manager.go:215] "Topology Admit Handler" podUID="8c2067fc81ba748a8e1e195c37969ce9" podNamespace="kube-system" podName="kube-apiserver-image-717000"
	Sep 14 22:11:01 image-717000 kubelet[2288]: I0914 22:11:01.100882    2288 topology_manager.go:215] "Topology Admit Handler" podUID="6879ab71402d591897bc50f9e148c54b" podNamespace="kube-system" podName="kube-controller-manager-image-717000"
	Sep 14 22:11:01 image-717000 kubelet[2288]: I0914 22:11:01.100895    2288 topology_manager.go:215] "Topology Admit Handler" podUID="d0aa4f53d9441c397a63236e53d83bd2" podNamespace="kube-system" podName="kube-scheduler-image-717000"
	Sep 14 22:11:01 image-717000 kubelet[2288]: E0914 22:11:01.106597    2288 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"etcd-image-717000\" already exists" pod="kube-system/etcd-image-717000"
	Sep 14 22:11:01 image-717000 kubelet[2288]: I0914 22:11:01.185409    2288 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/ccb005c555fcc6d5e192a66db807b287-etcd-certs\") pod \"etcd-image-717000\" (UID: \"ccb005c555fcc6d5e192a66db807b287\") " pod="kube-system/etcd-image-717000"
	Sep 14 22:11:01 image-717000 kubelet[2288]: I0914 22:11:01.185431    2288 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/ccb005c555fcc6d5e192a66db807b287-etcd-data\") pod \"etcd-image-717000\" (UID: \"ccb005c555fcc6d5e192a66db807b287\") " pod="kube-system/etcd-image-717000"
	Sep 14 22:11:01 image-717000 kubelet[2288]: I0914 22:11:01.185441    2288 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8c2067fc81ba748a8e1e195c37969ce9-k8s-certs\") pod \"kube-apiserver-image-717000\" (UID: \"8c2067fc81ba748a8e1e195c37969ce9\") " pod="kube-system/kube-apiserver-image-717000"
	Sep 14 22:11:01 image-717000 kubelet[2288]: I0914 22:11:01.185461    2288 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8c2067fc81ba748a8e1e195c37969ce9-usr-share-ca-certificates\") pod \"kube-apiserver-image-717000\" (UID: \"8c2067fc81ba748a8e1e195c37969ce9\") " pod="kube-system/kube-apiserver-image-717000"
	Sep 14 22:11:01 image-717000 kubelet[2288]: I0914 22:11:01.185470    2288 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8c2067fc81ba748a8e1e195c37969ce9-ca-certs\") pod \"kube-apiserver-image-717000\" (UID: \"8c2067fc81ba748a8e1e195c37969ce9\") " pod="kube-system/kube-apiserver-image-717000"
	Sep 14 22:11:01 image-717000 kubelet[2288]: I0914 22:11:01.185479    2288 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6879ab71402d591897bc50f9e148c54b-ca-certs\") pod \"kube-controller-manager-image-717000\" (UID: \"6879ab71402d591897bc50f9e148c54b\") " pod="kube-system/kube-controller-manager-image-717000"
	Sep 14 22:11:01 image-717000 kubelet[2288]: I0914 22:11:01.185487    2288 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6879ab71402d591897bc50f9e148c54b-flexvolume-dir\") pod \"kube-controller-manager-image-717000\" (UID: \"6879ab71402d591897bc50f9e148c54b\") " pod="kube-system/kube-controller-manager-image-717000"
	Sep 14 22:11:01 image-717000 kubelet[2288]: I0914 22:11:01.185497    2288 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6879ab71402d591897bc50f9e148c54b-k8s-certs\") pod \"kube-controller-manager-image-717000\" (UID: \"6879ab71402d591897bc50f9e148c54b\") " pod="kube-system/kube-controller-manager-image-717000"
	Sep 14 22:11:01 image-717000 kubelet[2288]: I0914 22:11:01.185510    2288 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6879ab71402d591897bc50f9e148c54b-kubeconfig\") pod \"kube-controller-manager-image-717000\" (UID: \"6879ab71402d591897bc50f9e148c54b\") " pod="kube-system/kube-controller-manager-image-717000"
	Sep 14 22:11:01 image-717000 kubelet[2288]: I0914 22:11:01.185521    2288 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6879ab71402d591897bc50f9e148c54b-usr-share-ca-certificates\") pod \"kube-controller-manager-image-717000\" (UID: \"6879ab71402d591897bc50f9e148c54b\") " pod="kube-system/kube-controller-manager-image-717000"
	Sep 14 22:11:01 image-717000 kubelet[2288]: I0914 22:11:01.185539    2288 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d0aa4f53d9441c397a63236e53d83bd2-kubeconfig\") pod \"kube-scheduler-image-717000\" (UID: \"d0aa4f53d9441c397a63236e53d83bd2\") " pod="kube-system/kube-scheduler-image-717000"
	Sep 14 22:11:01 image-717000 kubelet[2288]: I0914 22:11:01.967335    2288 apiserver.go:52] "Watching apiserver"
	Sep 14 22:11:01 image-717000 kubelet[2288]: I0914 22:11:01.985438    2288 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Sep 14 22:11:02 image-717000 kubelet[2288]: E0914 22:11:02.042356    2288 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-scheduler-image-717000\" already exists" pod="kube-system/kube-scheduler-image-717000"
	Sep 14 22:11:02 image-717000 kubelet[2288]: E0914 22:11:02.043132    2288 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"etcd-image-717000\" already exists" pod="kube-system/etcd-image-717000"
	Sep 14 22:11:02 image-717000 kubelet[2288]: E0914 22:11:02.043336    2288 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-image-717000\" already exists" pod="kube-system/kube-controller-manager-image-717000"
	Sep 14 22:11:02 image-717000 kubelet[2288]: I0914 22:11:02.051645    2288 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-image-717000" podStartSLOduration=1.05161705 podCreationTimestamp="2023-09-14 22:11:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-14 22:11:02.0473843 +0000 UTC m=+1.126762585" watchObservedRunningTime="2023-09-14 22:11:02.05161705 +0000 UTC m=+1.130995335"
	Sep 14 22:11:02 image-717000 kubelet[2288]: I0914 22:11:02.055345    2288 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-image-717000" podStartSLOduration=1.0553263 podCreationTimestamp="2023-09-14 22:11:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-14 22:11:02.051564466 +0000 UTC m=+1.130942752" watchObservedRunningTime="2023-09-14 22:11:02.0553263 +0000 UTC m=+1.134704544"
	Sep 14 22:11:02 image-717000 kubelet[2288]: I0914 22:11:02.059318    2288 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/etcd-image-717000" podStartSLOduration=3.059300466 podCreationTimestamp="2023-09-14 22:10:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-14 22:11:02.055428258 +0000 UTC m=+1.134806544" watchObservedRunningTime="2023-09-14 22:11:02.059300466 +0000 UTC m=+1.138678752"
	Sep 14 22:11:02 image-717000 kubelet[2288]: I0914 22:11:02.063351    2288 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-image-717000" podStartSLOduration=1.063337258 podCreationTimestamp="2023-09-14 22:11:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-14 22:11:02.059440675 +0000 UTC m=+1.138818960" watchObservedRunningTime="2023-09-14 22:11:02.063337258 +0000 UTC m=+1.142715544"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p image-717000 -n image-717000
helpers_test.go:261: (dbg) Run:  kubectl --context image-717000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: storage-provisioner
helpers_test.go:274: ======> post-mortem[TestImageBuild/serial/BuildWithBuildArg]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context image-717000 describe pod storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context image-717000 describe pod storage-provisioner: exit status 1 (37.007792ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context image-717000 describe pod storage-provisioner: exit status 1
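Note on the post-mortem above: the describe call reports NotFound because it queries the default namespace, while in a stock minikube deployment the storage-provisioner pod runs in kube-system. A namespaced lookup (a manual-check sketch, not part of the harness) would be:

	kubectl --context image-717000 -n kube-system get pod storage-provisioner
	kubectl --context image-717000 -n kube-system describe pod storage-provisioner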
--- FAIL: TestImageBuild/serial/BuildWithBuildArg (1.05s)
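For reference, the build this test exercises can be replayed by hand with the same arguments recorded in the Audit table later in this report (a reproduction sketch; aaa:latest and the testdata path are the values used by the test run, not new names):

	out/minikube-darwin-arm64 image build -t aaa:latest \
	  --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache \
	  ./testdata/image-build/test-arg -p image-717000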

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (56.32s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-438000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-438000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (11.219725167s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-438000 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-438000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [cb70a45e-10c0-4b18-a655-cc544febbbfd] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [cb70a45e-10c0-4b18-a655-cc544febbbfd] Running
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 11.013631208s
addons_test.go:238: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-438000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-438000 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-438000 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.105.6
E0914 15:12:54.722346    1425 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/functional-398000/client.crt: no such file or directory
addons_test.go:273: (dbg) Non-zero exit: nslookup hello-john.test 192.168.105.6: exit status 1 (15.038555292s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	

                                                
                                                
-- /stdout --
addons_test.go:275: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.105.6" : exit status 1
addons_test.go:279: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

                                                
                                                

                                                
                                                
stderr: 
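The failing step above boils down to resolving hello-john.test against the ingress-dns resolver at the address reported by "minikube ip" (192.168.105.6 in this run). A manual check equivalent to the harness step, sketched here (dig is assumed to be available on the host), is:

	# query the ingress-dns resolver directly; a healthy addon should answer with the cluster's ingress address
	nslookup hello-john.test 192.168.105.6
	# dig makes the per-query timeout explicit
	dig +time=5 +tries=1 hello-john.test @192.168.105.6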
addons_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-438000 addons disable ingress-dns --alsologtostderr -v=1
E0914 15:13:07.457598    1425 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/client.crt: no such file or directory
addons_test.go:282: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-438000 addons disable ingress-dns --alsologtostderr -v=1: (10.686155166s)
addons_test.go:287: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-438000 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-438000 addons disable ingress --alsologtostderr -v=1: (7.110904834s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ingress-addon-legacy-438000 -n ingress-addon-legacy-438000
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-438000 logs -n 25
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                   Args                   |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-398000 ssh sudo cat           | functional-398000           | jenkins | v1.31.2 | 14 Sep 23 15:07 PDT | 14 Sep 23 15:07 PDT |
	|                | /etc/test/nested/copy/1425/hosts         |                             |         |         |                     |                     |
	| image          | functional-398000                        | functional-398000           | jenkins | v1.31.2 | 14 Sep 23 15:07 PDT | 14 Sep 23 15:07 PDT |
	|                | image ls --format short                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| image          | functional-398000                        | functional-398000           | jenkins | v1.31.2 | 14 Sep 23 15:07 PDT | 14 Sep 23 15:07 PDT |
	|                | image ls --format yaml                   |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| image          | functional-398000                        | functional-398000           | jenkins | v1.31.2 | 14 Sep 23 15:07 PDT | 14 Sep 23 15:07 PDT |
	|                | image ls --format json                   |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| image          | functional-398000                        | functional-398000           | jenkins | v1.31.2 | 14 Sep 23 15:07 PDT | 14 Sep 23 15:07 PDT |
	|                | image ls --format table                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| ssh            | functional-398000 ssh pgrep              | functional-398000           | jenkins | v1.31.2 | 14 Sep 23 15:07 PDT |                     |
	|                | buildkitd                                |                             |         |         |                     |                     |
	| image          | functional-398000 image build -t         | functional-398000           | jenkins | v1.31.2 | 14 Sep 23 15:07 PDT | 14 Sep 23 15:07 PDT |
	|                | localhost/my-image:functional-398000     |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr         |                             |         |         |                     |                     |
	| image          | functional-398000 image ls               | functional-398000           | jenkins | v1.31.2 | 14 Sep 23 15:07 PDT | 14 Sep 23 15:07 PDT |
	| update-context | functional-398000                        | functional-398000           | jenkins | v1.31.2 | 14 Sep 23 15:07 PDT | 14 Sep 23 15:07 PDT |
	|                | update-context                           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                             |         |         |                     |                     |
	| update-context | functional-398000                        | functional-398000           | jenkins | v1.31.2 | 14 Sep 23 15:07 PDT | 14 Sep 23 15:07 PDT |
	|                | update-context                           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                             |         |         |                     |                     |
	| update-context | functional-398000                        | functional-398000           | jenkins | v1.31.2 | 14 Sep 23 15:07 PDT | 14 Sep 23 15:07 PDT |
	|                | update-context                           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                             |         |         |                     |                     |
	| delete         | -p functional-398000                     | functional-398000           | jenkins | v1.31.2 | 14 Sep 23 15:10 PDT | 14 Sep 23 15:10 PDT |
	| start          | -p image-717000 --driver=qemu2           | image-717000                | jenkins | v1.31.2 | 14 Sep 23 15:10 PDT | 14 Sep 23 15:11 PDT |
	|                |                                          |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-717000                | jenkins | v1.31.2 | 14 Sep 23 15:11 PDT | 14 Sep 23 15:11 PDT |
	|                | ./testdata/image-build/test-normal       |                             |         |         |                     |                     |
	|                | -p image-717000                          |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-717000                | jenkins | v1.31.2 | 14 Sep 23 15:11 PDT | 14 Sep 23 15:11 PDT |
	|                | --build-opt=build-arg=ENV_A=test_env_str |                             |         |         |                     |                     |
	|                | --build-opt=no-cache                     |                             |         |         |                     |                     |
	|                | ./testdata/image-build/test-arg -p       |                             |         |         |                     |                     |
	|                | image-717000                             |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-717000                | jenkins | v1.31.2 | 14 Sep 23 15:11 PDT | 14 Sep 23 15:11 PDT |
	|                | ./testdata/image-build/test-normal       |                             |         |         |                     |                     |
	|                | --build-opt=no-cache -p                  |                             |         |         |                     |                     |
	|                | image-717000                             |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-717000                | jenkins | v1.31.2 | 14 Sep 23 15:11 PDT | 14 Sep 23 15:11 PDT |
	|                | -f inner/Dockerfile                      |                             |         |         |                     |                     |
	|                | ./testdata/image-build/test-f            |                             |         |         |                     |                     |
	|                | -p image-717000                          |                             |         |         |                     |                     |
	| delete         | -p image-717000                          | image-717000                | jenkins | v1.31.2 | 14 Sep 23 15:11 PDT | 14 Sep 23 15:11 PDT |
	| start          | -p ingress-addon-legacy-438000           | ingress-addon-legacy-438000 | jenkins | v1.31.2 | 14 Sep 23 15:11 PDT | 14 Sep 23 15:12 PDT |
	|                | --kubernetes-version=v1.18.20            |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                   |                             |         |         |                     |                     |
	|                | --driver=qemu2                           |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-438000              | ingress-addon-legacy-438000 | jenkins | v1.31.2 | 14 Sep 23 15:12 PDT | 14 Sep 23 15:12 PDT |
	|                | addons enable ingress                    |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                   |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-438000              | ingress-addon-legacy-438000 | jenkins | v1.31.2 | 14 Sep 23 15:12 PDT | 14 Sep 23 15:12 PDT |
	|                | addons enable ingress-dns                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                   |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-438000              | ingress-addon-legacy-438000 | jenkins | v1.31.2 | 14 Sep 23 15:12 PDT | 14 Sep 23 15:12 PDT |
	|                | ssh curl -s http://127.0.0.1/            |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'             |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-438000 ip           | ingress-addon-legacy-438000 | jenkins | v1.31.2 | 14 Sep 23 15:12 PDT | 14 Sep 23 15:12 PDT |
	| addons         | ingress-addon-legacy-438000              | ingress-addon-legacy-438000 | jenkins | v1.31.2 | 14 Sep 23 15:13 PDT | 14 Sep 23 15:13 PDT |
	|                | addons disable ingress-dns               |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                   |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-438000              | ingress-addon-legacy-438000 | jenkins | v1.31.2 | 14 Sep 23 15:13 PDT | 14 Sep 23 15:13 PDT |
	|                | addons disable ingress                   |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                   |                             |         |         |                     |                     |
	|----------------|------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/14 15:11:04
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.21.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 15:11:04.659411    3333 out.go:296] Setting OutFile to fd 1 ...
	I0914 15:11:04.659549    3333 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:11:04.659551    3333 out.go:309] Setting ErrFile to fd 2...
	I0914 15:11:04.659554    3333 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:11:04.659721    3333 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17243-1006/.minikube/bin
	I0914 15:11:04.660888    3333 out.go:303] Setting JSON to false
	I0914 15:11:04.677457    3333 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2438,"bootTime":1694727026,"procs":411,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 15:11:04.677559    3333 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0914 15:11:04.681682    3333 out.go:177] * [ingress-addon-legacy-438000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0914 15:11:04.690817    3333 out.go:177]   - MINIKUBE_LOCATION=17243
	I0914 15:11:04.690932    3333 notify.go:220] Checking for updates...
	I0914 15:11:04.697588    3333 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	I0914 15:11:04.701651    3333 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 15:11:04.704685    3333 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 15:11:04.707590    3333 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	I0914 15:11:04.710622    3333 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 15:11:04.713846    3333 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 15:11:04.716551    3333 out.go:177] * Using the qemu2 driver based on user configuration
	I0914 15:11:04.723741    3333 start.go:298] selected driver: qemu2
	I0914 15:11:04.723745    3333 start.go:902] validating driver "qemu2" against <nil>
	I0914 15:11:04.723751    3333 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 15:11:04.725734    3333 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0914 15:11:04.727047    3333 out.go:177] * Automatically selected the socket_vmnet network
	I0914 15:11:04.729747    3333 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 15:11:04.729777    3333 cni.go:84] Creating CNI manager for ""
	I0914 15:11:04.729785    3333 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0914 15:11:04.729789    3333 start_flags.go:321] config:
	{Name:ingress-addon-legacy-438000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-438000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 15:11:04.733960    3333 iso.go:125] acquiring lock: {Name:mkd5b68b252c18146a825ff5365883948b3c6983 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 15:11:04.741619    3333 out.go:177] * Starting control plane node ingress-addon-legacy-438000 in cluster ingress-addon-legacy-438000
	I0914 15:11:04.745662    3333 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0914 15:11:04.951805    3333 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I0914 15:11:04.951860    3333 cache.go:57] Caching tarball of preloaded images
	I0914 15:11:04.952502    3333 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0914 15:11:04.960943    3333 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0914 15:11:04.964953    3333 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0914 15:11:05.191365    3333 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4?checksum=md5:c8c260b886393123ce9d312d8ac2379e -> /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I0914 15:11:14.442873    3333 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0914 15:11:14.443005    3333 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0914 15:11:15.195586    3333 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I0914 15:11:15.195819    3333 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/ingress-addon-legacy-438000/config.json ...
	I0914 15:11:15.195841    3333 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/ingress-addon-legacy-438000/config.json: {Name:mk80c94662b08a68e482d28e4df0de32af9f1227 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 15:11:15.196081    3333 start.go:365] acquiring machines lock for ingress-addon-legacy-438000: {Name:mk831f2bd5a9a0c932c8a59972319d68edb53d07 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 15:11:15.196110    3333 start.go:369] acquired machines lock for "ingress-addon-legacy-438000" in 23.292µs
	I0914 15:11:15.196123    3333 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-438000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-438000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 15:11:15.196163    3333 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 15:11:15.200201    3333 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0914 15:11:15.214549    3333 start.go:159] libmachine.API.Create for "ingress-addon-legacy-438000" (driver="qemu2")
	I0914 15:11:15.214574    3333 client.go:168] LocalClient.Create starting
	I0914 15:11:15.214656    3333 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem
	I0914 15:11:15.214682    3333 main.go:141] libmachine: Decoding PEM data...
	I0914 15:11:15.214694    3333 main.go:141] libmachine: Parsing certificate...
	I0914 15:11:15.214738    3333 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/cert.pem
	I0914 15:11:15.214755    3333 main.go:141] libmachine: Decoding PEM data...
	I0914 15:11:15.214763    3333 main.go:141] libmachine: Parsing certificate...
	I0914 15:11:15.215167    3333 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17243-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1694625400-17243-arm64.iso...
	I0914 15:11:15.329068    3333 main.go:141] libmachine: Creating SSH key...
	I0914 15:11:15.366014    3333 main.go:141] libmachine: Creating Disk image...
	I0914 15:11:15.366019    3333 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 15:11:15.366164    3333 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/ingress-addon-legacy-438000/disk.qcow2.raw /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/ingress-addon-legacy-438000/disk.qcow2
	I0914 15:11:15.374538    3333 main.go:141] libmachine: STDOUT: 
	I0914 15:11:15.374552    3333 main.go:141] libmachine: STDERR: 
	I0914 15:11:15.374607    3333 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/ingress-addon-legacy-438000/disk.qcow2 +20000M
	I0914 15:11:15.381682    3333 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 15:11:15.381694    3333 main.go:141] libmachine: STDERR: 
	I0914 15:11:15.381709    3333 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/ingress-addon-legacy-438000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/ingress-addon-legacy-438000/disk.qcow2
	I0914 15:11:15.381722    3333 main.go:141] libmachine: Starting QEMU VM...
	I0914 15:11:15.381759    3333 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4096 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/ingress-addon-legacy-438000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/ingress-addon-legacy-438000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/ingress-addon-legacy-438000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:3a:6d:8d:3f:19 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/ingress-addon-legacy-438000/disk.qcow2
	I0914 15:11:15.415776    3333 main.go:141] libmachine: STDOUT: 
	I0914 15:11:15.415802    3333 main.go:141] libmachine: STDERR: 
	I0914 15:11:15.415806    3333 main.go:141] libmachine: Attempt 0
	I0914 15:11:15.415828    3333 main.go:141] libmachine: Searching for 16:3a:6d:8d:3f:19 in /var/db/dhcpd_leases ...
	I0914 15:11:15.415909    3333 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0914 15:11:15.415928    3333 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:4a:f2:f2:dc:10:93 ID:1,4a:f2:f2:dc:10:93 Lease:0x6504d665}
	I0914 15:11:15.415937    3333 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:42:d7:28:b6:10:34 ID:1,42:d7:28:b6:10:34 Lease:0x6504d4f6}
	I0914 15:11:15.415943    3333 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:a2:d7:76:1d:83:30 ID:1,a2:d7:76:1d:83:30 Lease:0x65038369}
	I0914 15:11:15.415948    3333 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:fa:ab:b1:c2:6f:25 ID:1,fa:ab:b1:c2:6f:25 Lease:0x6503833e}
	I0914 15:11:17.418133    3333 main.go:141] libmachine: Attempt 1
	I0914 15:11:17.418217    3333 main.go:141] libmachine: Searching for 16:3a:6d:8d:3f:19 in /var/db/dhcpd_leases ...
	I0914 15:11:17.418640    3333 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0914 15:11:17.418693    3333 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:4a:f2:f2:dc:10:93 ID:1,4a:f2:f2:dc:10:93 Lease:0x6504d665}
	I0914 15:11:17.418728    3333 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:42:d7:28:b6:10:34 ID:1,42:d7:28:b6:10:34 Lease:0x6504d4f6}
	I0914 15:11:17.418806    3333 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:a2:d7:76:1d:83:30 ID:1,a2:d7:76:1d:83:30 Lease:0x65038369}
	I0914 15:11:17.418841    3333 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:fa:ab:b1:c2:6f:25 ID:1,fa:ab:b1:c2:6f:25 Lease:0x6503833e}
	I0914 15:11:19.421019    3333 main.go:141] libmachine: Attempt 2
	I0914 15:11:19.421060    3333 main.go:141] libmachine: Searching for 16:3a:6d:8d:3f:19 in /var/db/dhcpd_leases ...
	I0914 15:11:19.421182    3333 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0914 15:11:19.421195    3333 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:4a:f2:f2:dc:10:93 ID:1,4a:f2:f2:dc:10:93 Lease:0x6504d665}
	I0914 15:11:19.421216    3333 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:42:d7:28:b6:10:34 ID:1,42:d7:28:b6:10:34 Lease:0x6504d4f6}
	I0914 15:11:19.421223    3333 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:a2:d7:76:1d:83:30 ID:1,a2:d7:76:1d:83:30 Lease:0x65038369}
	I0914 15:11:19.421229    3333 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:fa:ab:b1:c2:6f:25 ID:1,fa:ab:b1:c2:6f:25 Lease:0x6503833e}
	I0914 15:11:21.423345    3333 main.go:141] libmachine: Attempt 3
	I0914 15:11:21.423395    3333 main.go:141] libmachine: Searching for 16:3a:6d:8d:3f:19 in /var/db/dhcpd_leases ...
	I0914 15:11:21.423452    3333 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0914 15:11:21.423459    3333 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:4a:f2:f2:dc:10:93 ID:1,4a:f2:f2:dc:10:93 Lease:0x6504d665}
	I0914 15:11:21.423468    3333 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:42:d7:28:b6:10:34 ID:1,42:d7:28:b6:10:34 Lease:0x6504d4f6}
	I0914 15:11:21.423472    3333 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:a2:d7:76:1d:83:30 ID:1,a2:d7:76:1d:83:30 Lease:0x65038369}
	I0914 15:11:21.423477    3333 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:fa:ab:b1:c2:6f:25 ID:1,fa:ab:b1:c2:6f:25 Lease:0x6503833e}
	I0914 15:11:23.425551    3333 main.go:141] libmachine: Attempt 4
	I0914 15:11:23.425566    3333 main.go:141] libmachine: Searching for 16:3a:6d:8d:3f:19 in /var/db/dhcpd_leases ...
	I0914 15:11:23.425607    3333 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0914 15:11:23.425616    3333 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:4a:f2:f2:dc:10:93 ID:1,4a:f2:f2:dc:10:93 Lease:0x6504d665}
	I0914 15:11:23.425621    3333 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:42:d7:28:b6:10:34 ID:1,42:d7:28:b6:10:34 Lease:0x6504d4f6}
	I0914 15:11:23.425627    3333 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:a2:d7:76:1d:83:30 ID:1,a2:d7:76:1d:83:30 Lease:0x65038369}
	I0914 15:11:23.425632    3333 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:fa:ab:b1:c2:6f:25 ID:1,fa:ab:b1:c2:6f:25 Lease:0x6503833e}
	I0914 15:11:25.427718    3333 main.go:141] libmachine: Attempt 5
	I0914 15:11:25.427734    3333 main.go:141] libmachine: Searching for 16:3a:6d:8d:3f:19 in /var/db/dhcpd_leases ...
	I0914 15:11:25.427809    3333 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0914 15:11:25.427819    3333 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:4a:f2:f2:dc:10:93 ID:1,4a:f2:f2:dc:10:93 Lease:0x6504d665}
	I0914 15:11:25.427826    3333 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:42:d7:28:b6:10:34 ID:1,42:d7:28:b6:10:34 Lease:0x6504d4f6}
	I0914 15:11:25.427832    3333 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:a2:d7:76:1d:83:30 ID:1,a2:d7:76:1d:83:30 Lease:0x65038369}
	I0914 15:11:25.427837    3333 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:fa:ab:b1:c2:6f:25 ID:1,fa:ab:b1:c2:6f:25 Lease:0x6503833e}
	I0914 15:11:27.429913    3333 main.go:141] libmachine: Attempt 6
	I0914 15:11:27.429969    3333 main.go:141] libmachine: Searching for 16:3a:6d:8d:3f:19 in /var/db/dhcpd_leases ...
	I0914 15:11:27.430098    3333 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0914 15:11:27.430112    3333 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:16:3a:6d:8d:3f:19 ID:1,16:3a:6d:8d:3f:19 Lease:0x6504d68e}
	I0914 15:11:27.430118    3333 main.go:141] libmachine: Found match: 16:3a:6d:8d:3f:19
	I0914 15:11:27.430130    3333 main.go:141] libmachine: IP: 192.168.105.6
	I0914 15:11:27.430137    3333 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.6)...
	I0914 15:11:28.436225    3333 machine.go:88] provisioning docker machine ...
	I0914 15:11:28.436249    3333 buildroot.go:166] provisioning hostname "ingress-addon-legacy-438000"
	I0914 15:11:28.436297    3333 main.go:141] libmachine: Using SSH client type: native
	I0914 15:11:28.436549    3333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104c34760] 0x104c36ed0 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0914 15:11:28.436556    3333 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-438000 && echo "ingress-addon-legacy-438000" | sudo tee /etc/hostname
	I0914 15:11:28.506036    3333 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-438000
	
	I0914 15:11:28.506107    3333 main.go:141] libmachine: Using SSH client type: native
	I0914 15:11:28.506344    3333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104c34760] 0x104c36ed0 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0914 15:11:28.506353    3333 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-438000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-438000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-438000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 15:11:28.572015    3333 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 15:11:28.572025    3333 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17243-1006/.minikube CaCertPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17243-1006/.minikube}
	I0914 15:11:28.572033    3333 buildroot.go:174] setting up certificates
	I0914 15:11:28.572038    3333 provision.go:83] configureAuth start
	I0914 15:11:28.572043    3333 provision.go:138] copyHostCerts
	I0914 15:11:28.572070    3333 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/17243-1006/.minikube/cert.pem
	I0914 15:11:28.572113    3333 exec_runner.go:144] found /Users/jenkins/minikube-integration/17243-1006/.minikube/cert.pem, removing ...
	I0914 15:11:28.572120    3333 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17243-1006/.minikube/cert.pem
	I0914 15:11:28.572237    3333 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17243-1006/.minikube/cert.pem (1123 bytes)
	I0914 15:11:28.572388    3333 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/17243-1006/.minikube/key.pem
	I0914 15:11:28.572410    3333 exec_runner.go:144] found /Users/jenkins/minikube-integration/17243-1006/.minikube/key.pem, removing ...
	I0914 15:11:28.572414    3333 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17243-1006/.minikube/key.pem
	I0914 15:11:28.572462    3333 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17243-1006/.minikube/key.pem (1675 bytes)
	I0914 15:11:28.572537    3333 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/17243-1006/.minikube/ca.pem
	I0914 15:11:28.572556    3333 exec_runner.go:144] found /Users/jenkins/minikube-integration/17243-1006/.minikube/ca.pem, removing ...
	I0914 15:11:28.572558    3333 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17243-1006/.minikube/ca.pem
	I0914 15:11:28.572602    3333 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17243-1006/.minikube/ca.pem (1082 bytes)
	I0914 15:11:28.572675    3333 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-438000 san=[192.168.105.6 192.168.105.6 localhost 127.0.0.1 minikube ingress-addon-legacy-438000]
	I0914 15:11:28.685027    3333 provision.go:172] copyRemoteCerts
	I0914 15:11:28.685053    3333 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 15:11:28.685060    3333 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/ingress-addon-legacy-438000/id_rsa Username:docker}
	I0914 15:11:28.719440    3333 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0914 15:11:28.719490    3333 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0914 15:11:28.726154    3333 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0914 15:11:28.726194    3333 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0914 15:11:28.732844    3333 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0914 15:11:28.732884    3333 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0914 15:11:28.740109    3333 provision.go:86] duration metric: configureAuth took 168.067083ms
	I0914 15:11:28.740116    3333 buildroot.go:189] setting minikube options for container-runtime
	I0914 15:11:28.740216    3333 config.go:182] Loaded profile config "ingress-addon-legacy-438000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0914 15:11:28.740249    3333 main.go:141] libmachine: Using SSH client type: native
	I0914 15:11:28.740476    3333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104c34760] 0x104c36ed0 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0914 15:11:28.740480    3333 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0914 15:11:28.806356    3333 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0914 15:11:28.806363    3333 buildroot.go:70] root file system type: tmpfs
	I0914 15:11:28.806416    3333 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0914 15:11:28.806461    3333 main.go:141] libmachine: Using SSH client type: native
	I0914 15:11:28.806706    3333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104c34760] 0x104c36ed0 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0914 15:11:28.806740    3333 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0914 15:11:28.874262    3333 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0914 15:11:28.874330    3333 main.go:141] libmachine: Using SSH client type: native
	I0914 15:11:28.874598    3333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104c34760] 0x104c36ed0 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0914 15:11:28.874607    3333 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0914 15:11:29.251411    3333 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0914 15:11:29.251426    3333 machine.go:91] provisioned docker machine in 815.198834ms
	I0914 15:11:29.251432    3333 client.go:171] LocalClient.Create took 14.036939916s
	I0914 15:11:29.251447    3333 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-438000" took 14.036991291s
	I0914 15:11:29.251452    3333 start.go:300] post-start starting for "ingress-addon-legacy-438000" (driver="qemu2")
	I0914 15:11:29.251457    3333 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 15:11:29.251530    3333 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 15:11:29.251539    3333 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/ingress-addon-legacy-438000/id_rsa Username:docker}
	I0914 15:11:29.286893    3333 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 15:11:29.288273    3333 info.go:137] Remote host: Buildroot 2021.02.12
	I0914 15:11:29.288282    3333 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17243-1006/.minikube/addons for local assets ...
	I0914 15:11:29.288351    3333 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17243-1006/.minikube/files for local assets ...
	I0914 15:11:29.288455    3333 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17243-1006/.minikube/files/etc/ssl/certs/14252.pem -> 14252.pem in /etc/ssl/certs
	I0914 15:11:29.288463    3333 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17243-1006/.minikube/files/etc/ssl/certs/14252.pem -> /etc/ssl/certs/14252.pem
	I0914 15:11:29.288570    3333 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 15:11:29.291302    3333 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/files/etc/ssl/certs/14252.pem --> /etc/ssl/certs/14252.pem (1708 bytes)
	I0914 15:11:29.297897    3333 start.go:303] post-start completed in 46.440583ms
	I0914 15:11:29.298272    3333 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/ingress-addon-legacy-438000/config.json ...
	I0914 15:11:29.298426    3333 start.go:128] duration metric: createHost completed in 14.102348667s
	I0914 15:11:29.298450    3333 main.go:141] libmachine: Using SSH client type: native
	I0914 15:11:29.298663    3333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104c34760] 0x104c36ed0 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0914 15:11:29.298668    3333 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0914 15:11:29.362061    3333 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694729489.550993918
	
	I0914 15:11:29.362069    3333 fix.go:206] guest clock: 1694729489.550993918
	I0914 15:11:29.362074    3333 fix.go:219] Guest: 2023-09-14 15:11:29.550993918 -0700 PDT Remote: 2023-09-14 15:11:29.298429 -0700 PDT m=+24.659873793 (delta=252.564918ms)
	I0914 15:11:29.362085    3333 fix.go:190] guest clock delta is within tolerance: 252.564918ms
	I0914 15:11:29.362087    3333 start.go:83] releasing machines lock for "ingress-addon-legacy-438000", held for 14.16606325s
	I0914 15:11:29.362352    3333 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 15:11:29.362374    3333 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/ingress-addon-legacy-438000/id_rsa Username:docker}
	I0914 15:11:29.362352    3333 ssh_runner.go:195] Run: cat /version.json
	I0914 15:11:29.362394    3333 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/ingress-addon-legacy-438000/id_rsa Username:docker}
	I0914 15:11:29.437429    3333 ssh_runner.go:195] Run: systemctl --version
	I0914 15:11:29.439633    3333 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 15:11:29.441513    3333 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 15:11:29.441540    3333 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0914 15:11:29.444813    3333 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0914 15:11:29.449608    3333 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 15:11:29.449615    3333 start.go:469] detecting cgroup driver to use...
	I0914 15:11:29.449669    3333 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 15:11:29.455913    3333 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0914 15:11:29.459338    3333 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0914 15:11:29.462462    3333 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0914 15:11:29.462505    3333 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0914 15:11:29.465316    3333 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0914 15:11:29.468519    3333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0914 15:11:29.471687    3333 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0914 15:11:29.474485    3333 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 15:11:29.477231    3333 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0914 15:11:29.480485    3333 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 15:11:29.483375    3333 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 15:11:29.485956    3333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 15:11:29.564088    3333 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0914 15:11:29.570397    3333 start.go:469] detecting cgroup driver to use...
	I0914 15:11:29.570453    3333 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0914 15:11:29.576339    3333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 15:11:29.582663    3333 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 15:11:29.589949    3333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 15:11:29.594128    3333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0914 15:11:29.598731    3333 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0914 15:11:29.636789    3333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0914 15:11:29.641937    3333 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 15:11:29.647224    3333 ssh_runner.go:195] Run: which cri-dockerd
	I0914 15:11:29.648643    3333 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0914 15:11:29.651797    3333 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0914 15:11:29.657000    3333 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0914 15:11:29.726859    3333 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0914 15:11:29.806582    3333 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0914 15:11:29.806596    3333 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0914 15:11:29.811959    3333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 15:11:29.881358    3333 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0914 15:11:31.049640    3333 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.168279416s)
	I0914 15:11:31.049714    3333 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0914 15:11:31.066554    3333 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0914 15:11:31.085770    3333 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 24.0.6 ...
	I0914 15:11:31.085901    3333 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0914 15:11:31.087309    3333 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 15:11:31.090986    3333 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0914 15:11:31.091033    3333 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0914 15:11:31.096622    3333 docker.go:636] Got preloaded images: 
	I0914 15:11:31.096628    3333 docker.go:642] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0914 15:11:31.096663    3333 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0914 15:11:31.100072    3333 ssh_runner.go:195] Run: which lz4
	I0914 15:11:31.101517    3333 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 -> /preloaded.tar.lz4
	I0914 15:11:31.101601    3333 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0914 15:11:31.103023    3333 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 15:11:31.103033    3333 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (459739018 bytes)
	I0914 15:11:32.804467    3333 docker.go:600] Took 1.702921 seconds to copy over tarball
	I0914 15:11:32.804519    3333 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 15:11:34.117786    3333 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.3132645s)
	I0914 15:11:34.117803    3333 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 15:11:34.139211    3333 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0914 15:11:34.144472    3333 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
	I0914 15:11:34.152491    3333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 15:11:34.234294    3333 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0914 15:11:35.697607    3333 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.463315375s)
	I0914 15:11:35.697706    3333 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0914 15:11:35.703433    3333 docker.go:636] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0914 15:11:35.703442    3333 docker.go:642] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0914 15:11:35.703446    3333 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0914 15:11:35.713574    3333 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0914 15:11:35.713604    3333 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0914 15:11:35.713642    3333 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0914 15:11:35.713697    3333 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0914 15:11:35.713785    3333 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0914 15:11:35.713809    3333 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0914 15:11:35.714283    3333 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0914 15:11:35.714317    3333 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 15:11:35.721890    3333 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0914 15:11:35.721944    3333 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0914 15:11:35.721963    3333 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0914 15:11:35.722015    3333 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0914 15:11:35.722053    3333 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0914 15:11:35.722828    3333 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0914 15:11:35.727983    3333 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0914 15:11:35.728001    3333 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	W0914 15:11:36.314286    3333 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I0914 15:11:36.314426    3333 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0914 15:11:36.320314    3333 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I0914 15:11:36.320341    3333 docker.go:316] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0914 15:11:36.320382    3333 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
	I0914 15:11:36.326515    3333 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	W0914 15:11:36.354487    3333 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0914 15:11:36.354619    3333 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0914 15:11:36.360847    3333 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I0914 15:11:36.360870    3333 docker.go:316] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0914 15:11:36.360914    3333 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.18.20
	I0914 15:11:36.367019    3333 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	W0914 15:11:36.546201    3333 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I0914 15:11:36.546315    3333 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0914 15:11:36.552814    3333 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I0914 15:11:36.552837    3333 docker.go:316] Removing image: registry.k8s.io/coredns:1.6.7
	I0914 15:11:36.552878    3333 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.7
	I0914 15:11:36.558910    3333 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	I0914 15:11:36.734568    3333 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0914 15:11:36.740603    3333 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I0914 15:11:36.740630    3333 docker.go:316] Removing image: registry.k8s.io/pause:3.2
	I0914 15:11:36.740695    3333 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
	I0914 15:11:36.746829    3333 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	W0914 15:11:36.959609    3333 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0914 15:11:36.959729    3333 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0914 15:11:36.966177    3333 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I0914 15:11:36.966205    3333 docker.go:316] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0914 15:11:36.966251    3333 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0914 15:11:36.972304    3333 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	W0914 15:11:37.150118    3333 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0914 15:11:37.150228    3333 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0914 15:11:37.156027    3333 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I0914 15:11:37.156059    3333 docker.go:316] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0914 15:11:37.156104    3333 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0914 15:11:37.166150    3333 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	W0914 15:11:37.402127    3333 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0914 15:11:37.402255    3333 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0914 15:11:37.407981    3333 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I0914 15:11:37.408002    3333 docker.go:316] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0914 15:11:37.408039    3333 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0914 15:11:37.413381    3333 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	W0914 15:11:38.236114    3333 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0914 15:11:38.236698    3333 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 15:11:38.261646    3333 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0914 15:11:38.261712    3333 docker.go:316] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 15:11:38.261840    3333 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 15:11:38.285781    3333 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0914 15:11:38.285881    3333 cache_images.go:92] LoadImages completed in 2.582457167s
	W0914 15:11:38.285960    3333 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0: no such file or directory
	I0914 15:11:38.286059    3333 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0914 15:11:38.300817    3333 cni.go:84] Creating CNI manager for ""
	I0914 15:11:38.300834    3333 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0914 15:11:38.300846    3333 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0914 15:11:38.300860    3333 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.6 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-438000 NodeName:ingress-addon-legacy-438000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.6"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.6 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0914 15:11:38.300979    3333 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.6
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-438000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.6
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.6"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 15:11:38.301035    3333 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-438000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-438000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0914 15:11:38.301114    3333 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0914 15:11:38.305452    3333 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 15:11:38.305490    3333 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 15:11:38.309146    3333 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (355 bytes)
	I0914 15:11:38.315443    3333 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0914 15:11:38.321128    3333 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2127 bytes)
	I0914 15:11:38.326633    3333 ssh_runner.go:195] Run: grep 192.168.105.6	control-plane.minikube.internal$ /etc/hosts
	I0914 15:11:38.327898    3333 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.6	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
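The one-liner above keeps /etc/hosts idempotent: it drops any existing line ending in a tab plus control-plane.minikube.internal, appends the fresh 192.168.105.6 mapping, and copies the temp file back over /etc/hosts with sudo. Below is a minimal Go sketch of the same filter-and-append logic, illustrative only (it writes to a scratch path rather than the real /etc/hosts):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const hostsPath = "/tmp/hosts-example" // stand-in for /etc/hosts in this sketch
        const entry = "192.168.105.6\tcontrol-plane.minikube.internal"

        data, err := os.ReadFile(hostsPath)
        if err != nil && !os.IsNotExist(err) {
            panic(err)
        }
        var kept []string
        for _, line := range strings.Split(string(data), "\n") {
            // drop any stale mapping, like grep -v $'\tcontrol-plane.minikube.internal$'
            if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
                continue
            }
            if line != "" {
                kept = append(kept, line)
            }
        }
        kept = append(kept, entry) // echo "192.168.105.6<TAB>control-plane.minikube.internal"
        out := strings.Join(kept, "\n") + "\n"
        if err := os.WriteFile(hostsPath, []byte(out), 0o644); err != nil {
            panic(err)
        }
        fmt.Print(out)
    }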
	I0914 15:11:38.331736    3333 certs.go:56] Setting up /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/ingress-addon-legacy-438000 for IP: 192.168.105.6
	I0914 15:11:38.331748    3333 certs.go:190] acquiring lock for shared ca certs: {Name:mkd19d6e2143685b57ba1e0d43c4081bbdb26a79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 15:11:38.331890    3333 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17243-1006/.minikube/ca.key
	I0914 15:11:38.331930    3333 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17243-1006/.minikube/proxy-client-ca.key
	I0914 15:11:38.331955    3333 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/ingress-addon-legacy-438000/client.key
	I0914 15:11:38.331962    3333 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/ingress-addon-legacy-438000/client.crt with IP's: []
	I0914 15:11:38.402265    3333 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/ingress-addon-legacy-438000/client.crt ...
	I0914 15:11:38.402269    3333 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/ingress-addon-legacy-438000/client.crt: {Name:mk35c2731e1580d8a18bee4a0d153127dfe44166 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 15:11:38.402500    3333 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/ingress-addon-legacy-438000/client.key ...
	I0914 15:11:38.402505    3333 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/ingress-addon-legacy-438000/client.key: {Name:mk212c21886cbaa3637f7718a4b94be471eccf0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 15:11:38.402633    3333 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/ingress-addon-legacy-438000/apiserver.key.b354f644
	I0914 15:11:38.402645    3333 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/ingress-addon-legacy-438000/apiserver.crt.b354f644 with IP's: [192.168.105.6 10.96.0.1 127.0.0.1 10.0.0.1]
	I0914 15:11:38.614928    3333 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/ingress-addon-legacy-438000/apiserver.crt.b354f644 ...
	I0914 15:11:38.614944    3333 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/ingress-addon-legacy-438000/apiserver.crt.b354f644: {Name:mk69aa50f55a2593c7d2a34afca5e585f1d6ec3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 15:11:38.615301    3333 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/ingress-addon-legacy-438000/apiserver.key.b354f644 ...
	I0914 15:11:38.615305    3333 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/ingress-addon-legacy-438000/apiserver.key.b354f644: {Name:mk2e5e0d035126440e87c8a7883c0bea25a03f17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 15:11:38.615410    3333 certs.go:337] copying /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/ingress-addon-legacy-438000/apiserver.crt.b354f644 -> /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/ingress-addon-legacy-438000/apiserver.crt
	I0914 15:11:38.615674    3333 certs.go:341] copying /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/ingress-addon-legacy-438000/apiserver.key.b354f644 -> /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/ingress-addon-legacy-438000/apiserver.key
	I0914 15:11:38.615772    3333 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/ingress-addon-legacy-438000/proxy-client.key
	I0914 15:11:38.615782    3333 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/ingress-addon-legacy-438000/proxy-client.crt with IP's: []
	I0914 15:11:38.733187    3333 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/ingress-addon-legacy-438000/proxy-client.crt ...
	I0914 15:11:38.733191    3333 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/ingress-addon-legacy-438000/proxy-client.crt: {Name:mk643e042a885cd30b61269365aff380b6e72793 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 15:11:38.733330    3333 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/ingress-addon-legacy-438000/proxy-client.key ...
	I0914 15:11:38.733333    3333 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/ingress-addon-legacy-438000/proxy-client.key: {Name:mk1a2da36791751fa6de9574d0765ca8fd529efe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 15:11:38.733431    3333 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/ingress-addon-legacy-438000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0914 15:11:38.733446    3333 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/ingress-addon-legacy-438000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0914 15:11:38.733462    3333 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/ingress-addon-legacy-438000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0914 15:11:38.733474    3333 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/ingress-addon-legacy-438000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0914 15:11:38.733486    3333 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17243-1006/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0914 15:11:38.733501    3333 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17243-1006/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0914 15:11:38.733512    3333 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17243-1006/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0914 15:11:38.733523    3333 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17243-1006/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0914 15:11:38.733595    3333 certs.go:437] found cert: /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/1425.pem (1338 bytes)
	W0914 15:11:38.733623    3333 certs.go:433] ignoring /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/1425_empty.pem, impossibly tiny 0 bytes
	I0914 15:11:38.733630    3333 certs.go:437] found cert: /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 15:11:38.733650    3333 certs.go:437] found cert: /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem (1082 bytes)
	I0914 15:11:38.733669    3333 certs.go:437] found cert: /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/cert.pem (1123 bytes)
	I0914 15:11:38.733693    3333 certs.go:437] found cert: /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/Users/jenkins/minikube-integration/17243-1006/.minikube/certs/key.pem (1675 bytes)
	I0914 15:11:38.733735    3333 certs.go:437] found cert: /Users/jenkins/minikube-integration/17243-1006/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17243-1006/.minikube/files/etc/ssl/certs/14252.pem (1708 bytes)
	I0914 15:11:38.733754    3333 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17243-1006/.minikube/files/etc/ssl/certs/14252.pem -> /usr/share/ca-certificates/14252.pem
	I0914 15:11:38.733769    3333 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17243-1006/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0914 15:11:38.733779    3333 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/1425.pem -> /usr/share/ca-certificates/1425.pem
	I0914 15:11:38.734089    3333 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/ingress-addon-legacy-438000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0914 15:11:38.741607    3333 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/ingress-addon-legacy-438000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 15:11:38.748424    3333 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/ingress-addon-legacy-438000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 15:11:38.755790    3333 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/ingress-addon-legacy-438000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0914 15:11:38.762790    3333 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 15:11:38.769387    3333 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 15:11:38.776321    3333 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 15:11:38.783613    3333 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0914 15:11:38.790718    3333 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/files/etc/ssl/certs/14252.pem --> /usr/share/ca-certificates/14252.pem (1708 bytes)
	I0914 15:11:38.797434    3333 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 15:11:38.804361    3333 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/1425.pem --> /usr/share/ca-certificates/1425.pem (1338 bytes)
	I0914 15:11:38.811388    3333 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 15:11:38.816400    3333 ssh_runner.go:195] Run: openssl version
	I0914 15:11:38.818438    3333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14252.pem && ln -fs /usr/share/ca-certificates/14252.pem /etc/ssl/certs/14252.pem"
	I0914 15:11:38.821197    3333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14252.pem
	I0914 15:11:38.822734    3333 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 14 22:04 /usr/share/ca-certificates/14252.pem
	I0914 15:11:38.822754    3333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14252.pem
	I0914 15:11:38.824451    3333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14252.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 15:11:38.827801    3333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 15:11:38.830881    3333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 15:11:38.832368    3333 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 14 21:36 /usr/share/ca-certificates/minikubeCA.pem
	I0914 15:11:38.832386    3333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 15:11:38.834225    3333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 15:11:38.837084    3333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1425.pem && ln -fs /usr/share/ca-certificates/1425.pem /etc/ssl/certs/1425.pem"
	I0914 15:11:38.840545    3333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1425.pem
	I0914 15:11:38.841989    3333 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 14 22:04 /usr/share/ca-certificates/1425.pem
	I0914 15:11:38.842012    3333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1425.pem
	I0914 15:11:38.843682    3333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1425.pem /etc/ssl/certs/51391683.0"
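Each CA certificate copied above is made visible to OpenSSL by symlinking /etc/ssl/certs/<subject-hash>.0 to it; the hash comes from openssl x509 -hash -noout (e.g. b5213941 for minikubeCA.pem, as logged). A small Go sketch of that step, shelling out to openssl exactly as the commands above do (illustrative; needs root and an existing certificate at the given path):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        cert := "/usr/share/ca-certificates/minikubeCA.pem" // path taken from the log above

        // openssl x509 -hash -noout -in <cert> prints the subject hash, e.g. "b5213941"
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out))

        // equivalent of "ln -fs <cert> /etc/ssl/certs/<hash>.0"
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // -f: replace an existing link if present
        if err := os.Symlink(cert, link); err != nil {
            panic(err)
        }
        fmt.Println(link, "->", cert)
    }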
	I0914 15:11:38.846729    3333 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0914 15:11:38.848077    3333 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0914 15:11:38.848105    3333 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-438000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-438000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.6 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 15:11:38.848170    3333 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0914 15:11:38.853514    3333 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 15:11:38.856758    3333 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 15:11:38.859869    3333 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 15:11:38.862692    3333 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 15:11:38.862711    3333 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0914 15:11:38.888696    3333 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0914 15:11:38.888841    3333 kubeadm.go:322] [preflight] Running pre-flight checks
	I0914 15:11:38.970854    3333 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 15:11:38.970911    3333 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 15:11:38.970959    3333 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0914 15:11:39.015833    3333 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 15:11:39.017297    3333 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 15:11:39.017318    3333 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0914 15:11:39.109625    3333 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 15:11:39.114307    3333 out.go:204]   - Generating certificates and keys ...
	I0914 15:11:39.114359    3333 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0914 15:11:39.114394    3333 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0914 15:11:39.141204    3333 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0914 15:11:39.193592    3333 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0914 15:11:39.353636    3333 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0914 15:11:39.399691    3333 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0914 15:11:39.563921    3333 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0914 15:11:39.563991    3333 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-438000 localhost] and IPs [192.168.105.6 127.0.0.1 ::1]
	I0914 15:11:39.680082    3333 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0914 15:11:39.680178    3333 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-438000 localhost] and IPs [192.168.105.6 127.0.0.1 ::1]
	I0914 15:11:39.723971    3333 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0914 15:11:39.789168    3333 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0914 15:11:39.846019    3333 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0914 15:11:39.846125    3333 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 15:11:39.892957    3333 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 15:11:40.026064    3333 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 15:11:40.363960    3333 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 15:11:40.450393    3333 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 15:11:40.450702    3333 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 15:11:40.454884    3333 out.go:204]   - Booting up control plane ...
	I0914 15:11:40.454951    3333 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 15:11:40.455062    3333 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 15:11:40.455112    3333 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 15:11:40.455362    3333 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 15:11:40.456613    3333 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 15:11:50.959697    3333 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.503075 seconds
	I0914 15:11:50.959778    3333 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 15:11:50.968919    3333 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 15:11:51.486919    3333 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0914 15:11:51.487150    3333 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-438000 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0914 15:11:51.993590    3333 kubeadm.go:322] [bootstrap-token] Using token: er6bti.k1mbow0ikfoztooj
	I0914 15:11:51.997063    3333 out.go:204]   - Configuring RBAC rules ...
	I0914 15:11:51.997148    3333 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 15:11:51.997981    3333 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0914 15:11:52.028684    3333 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 15:11:52.030499    3333 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0914 15:11:52.032196    3333 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 15:11:52.033122    3333 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 15:11:52.038396    3333 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0914 15:11:52.228588    3333 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0914 15:11:52.404447    3333 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0914 15:11:52.404469    3333 kubeadm.go:322] 
	I0914 15:11:52.404511    3333 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0914 15:11:52.404522    3333 kubeadm.go:322] 
	I0914 15:11:52.404569    3333 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0914 15:11:52.404572    3333 kubeadm.go:322] 
	I0914 15:11:52.404588    3333 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0914 15:11:52.404625    3333 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 15:11:52.404658    3333 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 15:11:52.404666    3333 kubeadm.go:322] 
	I0914 15:11:52.404699    3333 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0914 15:11:52.404760    3333 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 15:11:52.404802    3333 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 15:11:52.404805    3333 kubeadm.go:322] 
	I0914 15:11:52.404863    3333 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0914 15:11:52.404907    3333 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0914 15:11:52.404911    3333 kubeadm.go:322] 
	I0914 15:11:52.404963    3333 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token er6bti.k1mbow0ikfoztooj \
	I0914 15:11:52.405072    3333 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:64a56fc8b6798e154cd69d815313de8f6cddb906dc6093ac20ba940ac7c1d871 \
	I0914 15:11:52.405092    3333 kubeadm.go:322]     --control-plane 
	I0914 15:11:52.405098    3333 kubeadm.go:322] 
	I0914 15:11:52.405167    3333 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0914 15:11:52.405173    3333 kubeadm.go:322] 
	I0914 15:11:52.405220    3333 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token er6bti.k1mbow0ikfoztooj \
	I0914 15:11:52.405294    3333 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:64a56fc8b6798e154cd69d815313de8f6cddb906dc6093ac20ba940ac7c1d871 
	I0914 15:11:52.405404    3333 kubeadm.go:322] W0914 22:11:39.077712    1419 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0914 15:11:52.405526    3333 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0914 15:11:52.405618    3333 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 19.03
	I0914 15:11:52.405690    3333 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 15:11:52.405773    3333 kubeadm.go:322] W0914 22:11:40.643820    1419 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0914 15:11:52.405852    3333 kubeadm.go:322] W0914 22:11:40.644318    1419 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0914 15:11:52.405859    3333 cni.go:84] Creating CNI manager for ""
	I0914 15:11:52.405866    3333 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0914 15:11:52.405883    3333 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 15:11:52.405951    3333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 15:11:52.405951    3333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=d7492f2ae2d9b6e62b385ffcd97ebad62c645e82 minikube.k8s.io/name=ingress-addon-legacy-438000 minikube.k8s.io/updated_at=2023_09_14T15_11_52_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 15:11:52.410742    3333 ops.go:34] apiserver oom_adj: -16
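The oom_adj value of -16 reported above comes from the earlier /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj" step: find the kube-apiserver PID, then read its oom_adj from /proc. A short Go sketch of the same check (illustrative only):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        // pgrep kube-apiserver, then read /proc/<pid>/oom_adj (the log reports -16)
        out, err := exec.Command("pgrep", "kube-apiserver").Output()
        if err != nil {
            panic(err)
        }
        pid := strings.Fields(string(out))[0]
        val, err := os.ReadFile("/proc/" + pid + "/oom_adj")
        if err != nil {
            panic(err)
        }
        fmt.Printf("kube-apiserver oom_adj: %s", val)
    }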
	I0914 15:11:52.479740    3333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 15:11:52.515692    3333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 15:11:53.052484    3333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 15:11:53.552665    3333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 15:11:54.052502    3333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 15:11:54.552426    3333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 15:11:55.052546    3333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 15:11:55.552525    3333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 15:11:56.052335    3333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 15:11:56.552568    3333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 15:11:57.052449    3333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 15:11:57.552419    3333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 15:11:58.052403    3333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 15:11:58.552317    3333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 15:11:59.052359    3333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 15:11:59.552416    3333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 15:12:00.052432    3333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 15:12:00.552455    3333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 15:12:01.051020    3333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 15:12:01.552410    3333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 15:12:02.052342    3333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 15:12:02.552036    3333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 15:12:03.052348    3333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 15:12:03.552110    3333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 15:12:04.052370    3333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 15:12:04.552271    3333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 15:12:05.052290    3333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 15:12:05.552064    3333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 15:12:06.052272    3333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 15:12:06.552327    3333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 15:12:07.052221    3333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 15:12:07.551983    3333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 15:12:07.613721    3333 kubeadm.go:1081] duration metric: took 15.208137583s to wait for elevateKubeSystemPrivileges.
	I0914 15:12:07.613739    3333 kubeadm.go:406] StartCluster complete in 28.766188792s
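The repeated "kubectl get sa default" runs above are a plain poll: retry roughly every 500ms until the default service account exists, after which the cluster-admin binding for kube-system:default can be applied. A hedged Go sketch of that polling pattern (binary and kubeconfig paths are taken from the log; this is not minikube's actual implementation):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        kubectl := "/var/lib/minikube/binaries/v1.18.20/kubectl" // path from the log
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
                "--kubeconfig=/var/lib/minikube/kubeconfig")
            if cmd.Run() == nil { // exit status 0 once the ServiceAccount exists
                fmt.Println("default service account exists")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for the default service account")
    }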
	I0914 15:12:07.613755    3333 settings.go:142] acquiring lock: {Name:mkcccc97e247e7e1b2e556ccc64336c05a92af53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 15:12:07.613844    3333 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17243-1006/kubeconfig
	I0914 15:12:07.614216    3333 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/kubeconfig: {Name:mkeec13fc5a79792669e9cedabfbe21efeb27d3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 15:12:07.614423    3333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0914 15:12:07.614443    3333 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0914 15:12:07.614480    3333 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-438000"
	I0914 15:12:07.614487    3333 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-438000"
	I0914 15:12:07.614510    3333 host.go:66] Checking if "ingress-addon-legacy-438000" exists ...
	I0914 15:12:07.614540    3333 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-438000"
	I0914 15:12:07.614552    3333 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-438000"
	I0914 15:12:07.614667    3333 config.go:182] Loaded profile config "ingress-addon-legacy-438000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0914 15:12:07.614673    3333 kapi.go:59] client config for ingress-addon-legacy-438000: &rest.Config{Host:"https://192.168.105.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/ingress-addon-legacy-438000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/ingress-addon-legacy-438000/client.key", CAFile:"/Users/jenkins/minikube-integration/17243-1006/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105f1bf10), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0914 15:12:07.615036    3333 cert_rotation.go:137] Starting client certificate rotation controller
	I0914 15:12:07.615703    3333 kapi.go:59] client config for ingress-addon-legacy-438000: &rest.Config{Host:"https://192.168.105.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/ingress-addon-legacy-438000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/ingress-addon-legacy-438000/client.key", CAFile:"/Users/jenkins/minikube-integration/17243-1006/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105f1bf10), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0914 15:12:07.620653    3333 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 15:12:07.624585    3333 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 15:12:07.624591    3333 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 15:12:07.624601    3333 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/ingress-addon-legacy-438000/id_rsa Username:docker}
	I0914 15:12:07.627916    3333 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-438000"
	I0914 15:12:07.627932    3333 host.go:66] Checking if "ingress-addon-legacy-438000" exists ...
	I0914 15:12:07.628570    3333 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 15:12:07.628576    3333 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 15:12:07.628583    3333 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/ingress-addon-legacy-438000/id_rsa Username:docker}
	I0914 15:12:07.630815    3333 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-438000" context rescaled to 1 replicas
	I0914 15:12:07.630830    3333 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.6 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 15:12:07.634701    3333 out.go:177] * Verifying Kubernetes components...
	I0914 15:12:07.641727    3333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 15:12:07.663327    3333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0914 15:12:07.663588    3333 kapi.go:59] client config for ingress-addon-legacy-438000: &rest.Config{Host:"https://192.168.105.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/ingress-addon-legacy-438000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/ingress-addon-legacy-438000/client.key", CAFile:"/Users/jenkins/minikube-integration/17243-1006/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105f1bf10), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0914 15:12:07.663753    3333 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-438000" to be "Ready" ...
	I0914 15:12:07.665242    3333 node_ready.go:49] node "ingress-addon-legacy-438000" has status "Ready":"True"
	I0914 15:12:07.665251    3333 node_ready.go:38] duration metric: took 1.483583ms waiting for node "ingress-addon-legacy-438000" to be "Ready" ...
	I0914 15:12:07.665254    3333 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 15:12:07.667806    3333 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-438000" in "kube-system" namespace to be "Ready" ...
	I0914 15:12:07.669826    3333 pod_ready.go:92] pod "etcd-ingress-addon-legacy-438000" in "kube-system" namespace has status "Ready":"True"
	I0914 15:12:07.669833    3333 pod_ready.go:81] duration metric: took 2.019167ms waiting for pod "etcd-ingress-addon-legacy-438000" in "kube-system" namespace to be "Ready" ...
	I0914 15:12:07.669840    3333 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-438000" in "kube-system" namespace to be "Ready" ...
	I0914 15:12:07.671848    3333 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-438000" in "kube-system" namespace has status "Ready":"True"
	I0914 15:12:07.671854    3333 pod_ready.go:81] duration metric: took 2.012125ms waiting for pod "kube-apiserver-ingress-addon-legacy-438000" in "kube-system" namespace to be "Ready" ...
	I0914 15:12:07.671859    3333 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-438000" in "kube-system" namespace to be "Ready" ...
	I0914 15:12:07.673806    3333 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-438000" in "kube-system" namespace has status "Ready":"True"
	I0914 15:12:07.673813    3333 pod_ready.go:81] duration metric: took 1.951084ms waiting for pod "kube-controller-manager-ingress-addon-legacy-438000" in "kube-system" namespace to be "Ready" ...
	I0914 15:12:07.673820    3333 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-438000" in "kube-system" namespace to be "Ready" ...
	I0914 15:12:07.682854    3333 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 15:12:07.703801    3333 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 15:12:07.862901    3333 start.go:917] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
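The sed-plus-kubectl pipeline at 15:12:07.663327 is what produces the "host record injected" message above: it inserts a hosts{} block mapping host.minikube.internal to the gateway IP ahead of the resolv.conf forwarder, adds a log directive before errors, and replaces the coredns ConfigMap. A Go sketch of that same text transformation applied to a simplified stand-in Corefile (the real ConfigMap carries more plugins; this is illustrative, not minikube's code):

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        // Simplified stand-in Corefile for demonstration purposes.
        corefile := `.:53 {
        errors
        kubernetes cluster.local in-addr.arpa ip6.arpa
        forward . /etc/resolv.conf
        cache 30
    }`

        hostsBlock := "        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }\n"

        var out strings.Builder
        for _, line := range strings.Split(corefile, "\n") {
            trimmed := strings.TrimSpace(line)
            if strings.HasPrefix(trimmed, "forward . /etc/resolv.conf") {
                out.WriteString(hostsBlock) // host record goes ahead of upstream forwarding
            }
            if trimmed == "errors" {
                out.WriteString("        log\n") // enable query logging, as the sed expression does
            }
            out.WriteString(line + "\n")
        }
        fmt.Print(out.String())
    }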
	I0914 15:12:07.864124    3333 request.go:629] Waited for 188.459875ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-438000
	I0914 15:12:07.924006    3333 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0914 15:12:07.930900    3333 addons.go:502] enable addons completed in 316.463042ms: enabled=[storage-provisioner default-storageclass]
	I0914 15:12:08.064621    3333 request.go:629] Waited for 198.98325ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/nodes/ingress-addon-legacy-438000
	I0914 15:12:09.073800    3333 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-438000" in "kube-system" namespace has status "Ready":"True"
	I0914 15:12:09.073820    3333 pod_ready.go:81] duration metric: took 1.400024583s waiting for pod "kube-scheduler-ingress-addon-legacy-438000" in "kube-system" namespace to be "Ready" ...
	I0914 15:12:09.073829    3333 pod_ready.go:38] duration metric: took 1.408597084s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 15:12:09.073854    3333 api_server.go:52] waiting for apiserver process to appear ...
	I0914 15:12:09.074024    3333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 15:12:09.083560    3333 api_server.go:72] duration metric: took 1.45274675s to wait for apiserver process to appear ...
	I0914 15:12:09.083571    3333 api_server.go:88] waiting for apiserver healthz status ...
	I0914 15:12:09.083580    3333 api_server.go:253] Checking apiserver healthz at https://192.168.105.6:8443/healthz ...
	I0914 15:12:09.089669    3333 api_server.go:279] https://192.168.105.6:8443/healthz returned 200:
	ok
	I0914 15:12:09.090496    3333 api_server.go:141] control plane version: v1.18.20
	I0914 15:12:09.090507    3333 api_server.go:131] duration metric: took 6.931792ms to wait for apiserver health ...
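The healthz probe above is a plain HTTPS GET against https://192.168.105.6:8443/healthz that is considered successful once the apiserver returns 200 with body "ok". A minimal Go sketch of that check, trusting the cluster CA from /var/lib/minikube/certs/ca.crt (the IP and paths are taken from the log; run it from a machine that can reach the node):

    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "fmt"
        "io"
        "net/http"
        "os"
    )

    func main() {
        // Trust the cluster CA so the apiserver's serving certificate verifies.
        caPEM, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            panic(err)
        }
        pool := x509.NewCertPool()
        pool.AppendCertsFromPEM(caPEM)

        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{RootCAs: pool},
        }}
        resp, err := client.Get("https://192.168.105.6:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.StatusCode, string(body)) // a healthy apiserver returns 200 and "ok"
    }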
	I0914 15:12:09.090512    3333 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 15:12:09.095256    3333 system_pods.go:59] 7 kube-system pods found
	I0914 15:12:09.095270    3333 system_pods.go:61] "coredns-66bff467f8-97f9b" [80f7d2ad-ab8a-4c77-83c4-e192edb18960] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 15:12:09.095279    3333 system_pods.go:61] "etcd-ingress-addon-legacy-438000" [3b030397-2349-4ff6-af4a-2111acf4ba99] Running
	I0914 15:12:09.095285    3333 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-438000" [eb3e29a8-e352-491d-9886-f5c94fdc949d] Running
	I0914 15:12:09.095289    3333 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-438000" [7b535422-b2b7-488c-8f3a-971d18bad609] Running
	I0914 15:12:09.095296    3333 system_pods.go:61] "kube-proxy-6qshd" [5482391d-7f25-4ce5-8fd8-5f86f8283b46] Running
	I0914 15:12:09.095302    3333 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-438000" [ea56a9de-df5b-4382-b886-e6ceaa36e836] Running
	I0914 15:12:09.095307    3333 system_pods.go:61] "storage-provisioner" [71c8739b-bcb1-48f5-91e5-89df56cc068c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 15:12:09.095314    3333 system_pods.go:74] duration metric: took 4.797625ms to wait for pod list to return data ...
	I0914 15:12:09.095318    3333 default_sa.go:34] waiting for default service account to be created ...
	I0914 15:12:09.265842    3333 request.go:629] Waited for 170.4685ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/default/serviceaccounts
	I0914 15:12:09.268201    3333 default_sa.go:45] found service account: "default"
	I0914 15:12:09.268217    3333 default_sa.go:55] duration metric: took 172.895916ms for default service account to be created ...
	I0914 15:12:09.268225    3333 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 15:12:09.465880    3333 request.go:629] Waited for 197.572125ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/kube-system/pods
	I0914 15:12:09.481186    3333 system_pods.go:86] 7 kube-system pods found
	I0914 15:12:09.481225    3333 system_pods.go:89] "coredns-66bff467f8-97f9b" [80f7d2ad-ab8a-4c77-83c4-e192edb18960] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 15:12:09.481244    3333 system_pods.go:89] "etcd-ingress-addon-legacy-438000" [3b030397-2349-4ff6-af4a-2111acf4ba99] Running
	I0914 15:12:09.481257    3333 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-438000" [eb3e29a8-e352-491d-9886-f5c94fdc949d] Running
	I0914 15:12:09.481268    3333 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-438000" [7b535422-b2b7-488c-8f3a-971d18bad609] Running
	I0914 15:12:09.481278    3333 system_pods.go:89] "kube-proxy-6qshd" [5482391d-7f25-4ce5-8fd8-5f86f8283b46] Running
	I0914 15:12:09.481286    3333 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-438000" [ea56a9de-df5b-4382-b886-e6ceaa36e836] Running
	I0914 15:12:09.481298    3333 system_pods.go:89] "storage-provisioner" [71c8739b-bcb1-48f5-91e5-89df56cc068c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 15:12:09.481310    3333 system_pods.go:126] duration metric: took 213.082792ms to wait for k8s-apps to be running ...
	I0914 15:12:09.481324    3333 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 15:12:09.481543    3333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 15:12:09.499782    3333 system_svc.go:56] duration metric: took 18.454083ms WaitForService to wait for kubelet.
	I0914 15:12:09.499802    3333 kubeadm.go:581] duration metric: took 1.868996875s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0914 15:12:09.499827    3333 node_conditions.go:102] verifying NodePressure condition ...
	I0914 15:12:09.664520    3333 request.go:629] Waited for 164.645125ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/nodes
	I0914 15:12:09.669134    3333 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0914 15:12:09.669157    3333 node_conditions.go:123] node cpu capacity is 2
	I0914 15:12:09.669172    3333 node_conditions.go:105] duration metric: took 169.342042ms to run NodePressure ...
	I0914 15:12:09.669187    3333 start.go:228] waiting for startup goroutines ...
	I0914 15:12:09.669197    3333 start.go:233] waiting for cluster config update ...
	I0914 15:12:09.669219    3333 start.go:242] writing updated cluster config ...
	I0914 15:12:09.669992    3333 ssh_runner.go:195] Run: rm -f paused
	I0914 15:12:09.719926    3333 start.go:600] kubectl: 1.27.2, cluster: 1.18.20 (minor skew: 9)
	I0914 15:12:09.724029    3333 out.go:177] 
	W0914 15:12:09.727086    3333 out.go:239] ! /usr/local/bin/kubectl is version 1.27.2, which may have incompatibilities with Kubernetes 1.18.20.
	I0914 15:12:09.730975    3333 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0914 15:12:09.737992    3333 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-438000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Thu 2023-09-14 22:11:26 UTC, ends at Thu 2023-09-14 22:13:25 UTC. --
	Sep 14 22:12:56 ingress-addon-legacy-438000 dockerd[1082]: time="2023-09-14T22:12:56.461303489Z" level=warning msg="cleaning up after shim disconnected" id=97b84a7093f719428af3e8cef88cff34c705f0550b34bc64dbca3a5aed4adf6e namespace=moby
	Sep 14 22:12:56 ingress-addon-legacy-438000 dockerd[1075]: time="2023-09-14T22:12:56.461139121Z" level=info msg="ignoring event" container=97b84a7093f719428af3e8cef88cff34c705f0550b34bc64dbca3a5aed4adf6e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 14 22:12:56 ingress-addon-legacy-438000 dockerd[1082]: time="2023-09-14T22:12:56.461335613Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 14 22:13:08 ingress-addon-legacy-438000 dockerd[1075]: time="2023-09-14T22:13:08.875695100Z" level=info msg="ignoring event" container=0a2d4dc734a8ee84613d1527b8433b229ffdf5d82b54e64276ecf0a237536b71 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 14 22:13:08 ingress-addon-legacy-438000 dockerd[1082]: time="2023-09-14T22:13:08.875928886Z" level=info msg="shim disconnected" id=0a2d4dc734a8ee84613d1527b8433b229ffdf5d82b54e64276ecf0a237536b71 namespace=moby
	Sep 14 22:13:08 ingress-addon-legacy-438000 dockerd[1082]: time="2023-09-14T22:13:08.875972468Z" level=warning msg="cleaning up after shim disconnected" id=0a2d4dc734a8ee84613d1527b8433b229ffdf5d82b54e64276ecf0a237536b71 namespace=moby
	Sep 14 22:13:08 ingress-addon-legacy-438000 dockerd[1082]: time="2023-09-14T22:13:08.875979802Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 14 22:13:09 ingress-addon-legacy-438000 dockerd[1082]: time="2023-09-14T22:13:09.888979945Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 14 22:13:09 ingress-addon-legacy-438000 dockerd[1082]: time="2023-09-14T22:13:09.889028902Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 22:13:09 ingress-addon-legacy-438000 dockerd[1082]: time="2023-09-14T22:13:09.889390268Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 14 22:13:09 ingress-addon-legacy-438000 dockerd[1082]: time="2023-09-14T22:13:09.889409643Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 22:13:09 ingress-addon-legacy-438000 dockerd[1075]: time="2023-09-14T22:13:09.945834226Z" level=info msg="ignoring event" container=96e67aad4cb4b27d04589ec208c93383ec00fc07e7dde1bf0c589b8e46dd8fb6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 14 22:13:09 ingress-addon-legacy-438000 dockerd[1082]: time="2023-09-14T22:13:09.946054971Z" level=info msg="shim disconnected" id=96e67aad4cb4b27d04589ec208c93383ec00fc07e7dde1bf0c589b8e46dd8fb6 namespace=moby
	Sep 14 22:13:09 ingress-addon-legacy-438000 dockerd[1082]: time="2023-09-14T22:13:09.946082970Z" level=warning msg="cleaning up after shim disconnected" id=96e67aad4cb4b27d04589ec208c93383ec00fc07e7dde1bf0c589b8e46dd8fb6 namespace=moby
	Sep 14 22:13:09 ingress-addon-legacy-438000 dockerd[1082]: time="2023-09-14T22:13:09.946087179Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 14 22:13:20 ingress-addon-legacy-438000 dockerd[1075]: time="2023-09-14T22:13:20.342893310Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=2d8468f929dae3d59a0bb56e061a61ad6ca6c9710dd4b8d04913545ae85f7391
	Sep 14 22:13:20 ingress-addon-legacy-438000 dockerd[1075]: time="2023-09-14T22:13:20.349121403Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=2d8468f929dae3d59a0bb56e061a61ad6ca6c9710dd4b8d04913545ae85f7391
	Sep 14 22:13:20 ingress-addon-legacy-438000 dockerd[1075]: time="2023-09-14T22:13:20.416181660Z" level=info msg="ignoring event" container=2d8468f929dae3d59a0bb56e061a61ad6ca6c9710dd4b8d04913545ae85f7391 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 14 22:13:20 ingress-addon-legacy-438000 dockerd[1082]: time="2023-09-14T22:13:20.416379614Z" level=info msg="shim disconnected" id=2d8468f929dae3d59a0bb56e061a61ad6ca6c9710dd4b8d04913545ae85f7391 namespace=moby
	Sep 14 22:13:20 ingress-addon-legacy-438000 dockerd[1082]: time="2023-09-14T22:13:20.416663776Z" level=warning msg="cleaning up after shim disconnected" id=2d8468f929dae3d59a0bb56e061a61ad6ca6c9710dd4b8d04913545ae85f7391 namespace=moby
	Sep 14 22:13:20 ingress-addon-legacy-438000 dockerd[1082]: time="2023-09-14T22:13:20.416687609Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 14 22:13:20 ingress-addon-legacy-438000 dockerd[1075]: time="2023-09-14T22:13:20.457705390Z" level=info msg="ignoring event" container=d38ef5cb17225040b8558f49fef3ac9ced685ca3cb8548fe2b577db3d789fefd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 14 22:13:20 ingress-addon-legacy-438000 dockerd[1082]: time="2023-09-14T22:13:20.457873678Z" level=info msg="shim disconnected" id=d38ef5cb17225040b8558f49fef3ac9ced685ca3cb8548fe2b577db3d789fefd namespace=moby
	Sep 14 22:13:20 ingress-addon-legacy-438000 dockerd[1082]: time="2023-09-14T22:13:20.457966677Z" level=warning msg="cleaning up after shim disconnected" id=d38ef5cb17225040b8558f49fef3ac9ced685ca3cb8548fe2b577db3d789fefd namespace=moby
	Sep 14 22:13:20 ingress-addon-legacy-438000 dockerd[1082]: time="2023-09-14T22:13:20.457977593Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID
	96e67aad4cb4b       a39a074194753                                                                                                      16 seconds ago       Exited              hello-world-app           2                   f34b78996eeeb
	9ee48e2d66831       nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70                                      41 seconds ago       Running             nginx                     0                   c486200cc1ff7
	2d8468f929dae       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   57 seconds ago       Exited              controller                0                   d38ef5cb17225
	f9444ff428b42       jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7               About a minute ago   Exited              patch                     0                   e3ed282a258bc
	64c65310f86de       jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7               About a minute ago   Exited              create                    0                   a2806486771ab
	c5cd91320c70d       gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944    About a minute ago   Running             storage-provisioner       0                   b3cafc76ff747
	4c6b27fca1ba7       565297bc6f7d4                                                                                                      About a minute ago   Running             kube-proxy                0                   df34858fc962b
	e207691c71d0b       6e17ba78cf3eb                                                                                                      About a minute ago   Running             coredns                   0                   f7ab08177ba35
	b59f36513d162       68a4fac29a865                                                                                                      About a minute ago   Running             kube-controller-manager   0                   0224b9bc5dcd1
	f0122b388b7f3       ab707b0a0ea33                                                                                                      About a minute ago   Running             etcd                      0                   f850fe831e538
	8789f172fde5f       095f37015706d                                                                                                      About a minute ago   Running             kube-scheduler            0                   7aec64311ceff
	00c38c5f61024       2694cf044d665                                                                                                      About a minute ago   Running             kube-apiserver            0                   4c75b44716aea
	
	* 
	* ==> coredns [e207691c71d0] <==
	* [INFO] 172.17.0.1:60224 - 63583 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000038374s
	[INFO] 172.17.0.1:60224 - 21400 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000036707s
	[INFO] 172.17.0.1:60224 - 11251 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000035748s
	[INFO] 172.17.0.1:60224 - 16581 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000048248s
	[INFO] 172.17.0.1:20159 - 26587 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000055414s
	[INFO] 172.17.0.1:20159 - 4621 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000013041s
	[INFO] 172.17.0.1:20159 - 17039 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000105746s
	[INFO] 172.17.0.1:20159 - 42927 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00003154s
	[INFO] 172.17.0.1:20159 - 26736 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000029416s
	[INFO] 172.17.0.1:20159 - 59680 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000012291s
	[INFO] 172.17.0.1:20159 - 44449 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000030748s
	[INFO] 172.17.0.1:35417 - 13257 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000038748s
	[INFO] 172.17.0.1:7939 - 58489 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000010791s
	[INFO] 172.17.0.1:35417 - 46647 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000026415s
	[INFO] 172.17.0.1:35417 - 55896 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000010083s
	[INFO] 172.17.0.1:35417 - 3032 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000017707s
	[INFO] 172.17.0.1:35417 - 15756 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000009875s
	[INFO] 172.17.0.1:7939 - 58869 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000042665s
	[INFO] 172.17.0.1:7939 - 47905 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000009s
	[INFO] 172.17.0.1:35417 - 5473 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000008792s
	[INFO] 172.17.0.1:35417 - 60380 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000023457s
	[INFO] 172.17.0.1:7939 - 31363 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000017791s
	[INFO] 172.17.0.1:7939 - 51162 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000013333s
	[INFO] 172.17.0.1:7939 - 37389 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000012875s
	[INFO] 172.17.0.1:7939 - 55763 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000015458s
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-438000
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-438000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7492f2ae2d9b6e62b385ffcd97ebad62c645e82
	                    minikube.k8s.io/name=ingress-addon-legacy-438000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_14T15_11_52_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Sep 2023 22:11:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-438000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Sep 2023 22:13:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Sep 2023 22:12:58 +0000   Thu, 14 Sep 2023 22:11:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Sep 2023 22:12:58 +0000   Thu, 14 Sep 2023 22:11:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Sep 2023 22:12:58 +0000   Thu, 14 Sep 2023 22:11:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Sep 2023 22:12:58 +0000   Thu, 14 Sep 2023 22:11:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.6
	  Hostname:    ingress-addon-legacy-438000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             4003124Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             4003124Ki
	  pods:               110
	System Info:
	  Machine ID:                 e22932da8e394377bcc9116d3efa27be
	  System UUID:                e22932da8e394377bcc9116d3efa27be
	  Boot ID:                    6f0a3028-9311-403b-b042-67b1e3822365
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-lxml5                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  kube-system                 coredns-66bff467f8-97f9b                               100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     78s
	  kube-system                 etcd-ingress-addon-legacy-438000                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-apiserver-ingress-addon-legacy-438000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-438000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-proxy-6qshd                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 kube-scheduler-ingress-addon-legacy-438000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             70Mi (1%)   170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  Starting                 99s                kubelet     Starting kubelet.
	  Normal  NodeAllocatableEnforced  99s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  98s (x4 over 99s)  kubelet     Node ingress-addon-legacy-438000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    98s (x4 over 99s)  kubelet     Node ingress-addon-legacy-438000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     98s (x3 over 99s)  kubelet     Node ingress-addon-legacy-438000 status is now: NodeHasSufficientPID
	  Normal  Starting                 87s                kubelet     Starting kubelet.
	  Normal  NodeAllocatableEnforced  87s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  87s                kubelet     Node ingress-addon-legacy-438000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    87s                kubelet     Node ingress-addon-legacy-438000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     87s                kubelet     Node ingress-addon-legacy-438000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                87s                kubelet     Node ingress-addon-legacy-438000 status is now: NodeReady
	  Normal  Starting                 77s                kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Sep14 22:11] ACPI: SRAT not present
	[  +0.000000] KASLR disabled due to lack of seed
	[  +0.650910] EINJ: EINJ table not found.
	[  +0.526538] systemd-fstab-generator[117]: Ignoring "noauto" for root device
	[  +0.044988] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000908] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +3.142115] systemd-fstab-generator[484]: Ignoring "noauto" for root device
	[  +0.086020] systemd-fstab-generator[495]: Ignoring "noauto" for root device
	[  +0.447133] systemd-fstab-generator[714]: Ignoring "noauto" for root device
	[  +0.164778] systemd-fstab-generator[750]: Ignoring "noauto" for root device
	[  +0.080862] systemd-fstab-generator[761]: Ignoring "noauto" for root device
	[  +0.075101] systemd-fstab-generator[774]: Ignoring "noauto" for root device
	[  +1.147612] kauditd_printk_skb: 17 callbacks suppressed
	[  +3.203543] systemd-fstab-generator[1068]: Ignoring "noauto" for root device
	[  +4.863948] systemd-fstab-generator[1534]: Ignoring "noauto" for root device
	[  +7.670414] kauditd_printk_skb: 68 callbacks suppressed
	[  +0.066462] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[  +5.284516] systemd-fstab-generator[2620]: Ignoring "noauto" for root device
	[Sep14 22:12] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.764012] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.657998] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[ +37.506026] kauditd_printk_skb: 3 callbacks suppressed
	
	* 
	* ==> etcd [f0122b388b7f] <==
	* raft2023/09/14 22:11:47 INFO: ed054832bd1917e1 became follower at term 0
	raft2023/09/14 22:11:47 INFO: newRaft ed054832bd1917e1 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/09/14 22:11:47 INFO: ed054832bd1917e1 became follower at term 1
	raft2023/09/14 22:11:47 INFO: ed054832bd1917e1 switched to configuration voters=(17079136544630577121)
	2023-09-14 22:11:47.718783 W | auth: simple token is not cryptographically signed
	2023-09-14 22:11:47.722336 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-09-14 22:11:47.724174 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-09-14 22:11:47.724258 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-09-14 22:11:47.724533 I | embed: listening for peers on 192.168.105.6:2380
	2023-09-14 22:11:47.724633 I | etcdserver: ed054832bd1917e1 as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2023/09/14 22:11:47 INFO: ed054832bd1917e1 switched to configuration voters=(17079136544630577121)
	2023-09-14 22:11:47.724890 I | etcdserver/membership: added member ed054832bd1917e1 [https://192.168.105.6:2380] to cluster 45a39c2c59b0edf4
	raft2023/09/14 22:11:48 INFO: ed054832bd1917e1 is starting a new election at term 1
	raft2023/09/14 22:11:48 INFO: ed054832bd1917e1 became candidate at term 2
	raft2023/09/14 22:11:48 INFO: ed054832bd1917e1 received MsgVoteResp from ed054832bd1917e1 at term 2
	raft2023/09/14 22:11:48 INFO: ed054832bd1917e1 became leader at term 2
	raft2023/09/14 22:11:48 INFO: raft.node: ed054832bd1917e1 elected leader ed054832bd1917e1 at term 2
	2023-09-14 22:11:48.417800 I | etcdserver: setting up the initial cluster version to 3.4
	2023-09-14 22:11:48.418720 I | etcdserver: published {Name:ingress-addon-legacy-438000 ClientURLs:[https://192.168.105.6:2379]} to cluster 45a39c2c59b0edf4
	2023-09-14 22:11:48.418822 I | embed: ready to serve client requests
	2023-09-14 22:11:48.419572 I | embed: serving client requests on 192.168.105.6:2379
	2023-09-14 22:11:48.419623 I | embed: ready to serve client requests
	2023-09-14 22:11:48.420059 I | embed: serving client requests on 127.0.0.1:2379
	2023-09-14 22:11:48.452465 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-09-14 22:11:48.452538 I | etcdserver/api: enabled capabilities for version 3.4
	
	* 
	* ==> kernel <==
	*  22:13:25 up 2 min,  0 users,  load average: 1.04, 0.38, 0.14
	Linux ingress-addon-legacy-438000 5.10.57 #1 SMP PREEMPT Wed Sep 13 19:05:16 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [00c38c5f6102] <==
	* I0914 22:11:49.905627       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
	I0914 22:11:49.986080       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0914 22:11:49.986283       1 cache.go:39] Caches are synced for autoregister controller
	I0914 22:11:49.986436       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0914 22:11:49.987103       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0914 22:11:49.987454       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0914 22:11:50.885593       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0914 22:11:50.885935       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0914 22:11:50.904758       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0914 22:11:50.919719       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0914 22:11:50.919752       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0914 22:11:51.052122       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0914 22:11:51.062487       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0914 22:11:51.167179       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.105.6]
	I0914 22:11:51.167562       1 controller.go:609] quota admission added evaluator for: endpoints
	I0914 22:11:51.168828       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0914 22:11:52.217425       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0914 22:11:52.413067       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0914 22:11:52.588562       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0914 22:11:58.773793       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0914 22:12:07.886744       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0914 22:12:08.091648       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0914 22:12:10.124930       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0914 22:12:40.792692       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	E0914 22:13:18.344651       1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
	
	* 
	* ==> kube-controller-manager [b59f36513d16] <==
	* I0914 22:12:07.885497       1 shared_informer.go:230] Caches are synced for deployment 
	I0914 22:12:07.888425       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"ff59668f-66ca-46d1-b9e3-446ad121ae17", APIVersion:"apps/v1", ResourceVersion:"325", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-66bff467f8 to 1
	I0914 22:12:07.895494       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"1be575c7-fbb8-4f94-8dfb-74bf2a6f2b12", APIVersion:"apps/v1", ResourceVersion:"331", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-97f9b
	I0914 22:12:07.932853       1 shared_informer.go:230] Caches are synced for node 
	I0914 22:12:07.932879       1 range_allocator.go:172] Starting range CIDR allocator
	I0914 22:12:07.932882       1 shared_informer.go:223] Waiting for caches to sync for cidrallocator
	I0914 22:12:07.932883       1 shared_informer.go:230] Caches are synced for cidrallocator 
	I0914 22:12:07.935180       1 range_allocator.go:373] Set node ingress-addon-legacy-438000 PodCIDR to [10.244.0.0/24]
	I0914 22:12:08.089831       1 shared_informer.go:230] Caches are synced for daemon sets 
	I0914 22:12:08.094500       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"da6033e0-d4b5-4c2f-b300-67dfd3420396", APIVersion:"apps/v1", ResourceVersion:"221", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-6qshd
	I0914 22:12:08.188175       1 shared_informer.go:230] Caches are synced for stateful set 
	I0914 22:12:08.240902       1 shared_informer.go:230] Caches are synced for resource quota 
	I0914 22:12:08.280823       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0914 22:12:08.280834       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0914 22:12:08.286894       1 shared_informer.go:230] Caches are synced for resource quota 
	I0914 22:12:08.641149       1 shared_informer.go:223] Waiting for caches to sync for garbage collector
	I0914 22:12:08.641165       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0914 22:12:10.116380       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"d6b92987-ba9a-4aa4-84ae-3c6aa9085900", APIVersion:"apps/v1", ResourceVersion:"414", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0914 22:12:10.128131       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"72ef3d80-35c0-4225-bd5c-2038c9f10b2a", APIVersion:"apps/v1", ResourceVersion:"415", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-vmrhb
	I0914 22:12:10.142041       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"41876226-b226-4609-9424-7cccda2660a7", APIVersion:"batch/v1", ResourceVersion:"419", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-rk2kw
	I0914 22:12:10.159633       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"441d0f6b-2fa9-49cf-92db-98198e8c55bc", APIVersion:"batch/v1", ResourceVersion:"426", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-w8k84
	I0914 22:12:12.909612       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"41876226-b226-4609-9424-7cccda2660a7", APIVersion:"batch/v1", ResourceVersion:"434", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0914 22:12:13.924085       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"441d0f6b-2fa9-49cf-92db-98198e8c55bc", APIVersion:"batch/v1", ResourceVersion:"439", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0914 22:12:52.076119       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"08f42b1f-6731-4f5b-8dba-f8b1dd8425a4", APIVersion:"apps/v1", ResourceVersion:"571", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0914 22:12:52.084105       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"5a3219a7-34d7-4787-9f7f-ad3b8bdae6a5", APIVersion:"apps/v1", ResourceVersion:"572", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-lxml5
	
	* 
	* ==> kube-proxy [4c6b27fca1ba] <==
	* W0914 22:12:08.605156       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0914 22:12:08.609345       1 node.go:136] Successfully retrieved node IP: 192.168.105.6
	I0914 22:12:08.609365       1 server_others.go:186] Using iptables Proxier.
	I0914 22:12:08.609521       1 server.go:583] Version: v1.18.20
	I0914 22:12:08.610676       1 config.go:315] Starting service config controller
	I0914 22:12:08.610728       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0914 22:12:08.613517       1 config.go:133] Starting endpoints config controller
	I0914 22:12:08.613574       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0914 22:12:08.713580       1 shared_informer.go:230] Caches are synced for service config 
	I0914 22:12:08.716552       1 shared_informer.go:230] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [8789f172fde5] <==
	* W0914 22:11:49.935879       1 authentication.go:349] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0914 22:11:49.935951       1 authentication.go:297] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0914 22:11:49.935970       1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0914 22:11:49.936001       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0914 22:11:49.953883       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0914 22:11:49.954220       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0914 22:11:49.955407       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0914 22:11:49.955555       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0914 22:11:49.955610       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0914 22:11:49.955652       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0914 22:11:49.957025       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0914 22:11:49.957792       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0914 22:11:49.957834       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0914 22:11:49.957875       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0914 22:11:49.957935       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0914 22:11:49.957977       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0914 22:11:49.958066       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0914 22:11:49.958091       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0914 22:11:49.958108       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0914 22:11:49.958125       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0914 22:11:49.958217       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0914 22:11:49.958238       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0914 22:11:50.887848       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0914 22:11:50.975298       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0914 22:11:53.155917       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-09-14 22:11:26 UTC, ends at Thu 2023-09-14 22:13:25 UTC. --
	Sep 14 22:13:02 ingress-addon-legacy-438000 kubelet[2626]: I0914 22:13:02.829009    2626 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: aa03a1a52c3c622edef0a0b987aacfab7414e5d3f892e11b54de78ee0dc14a05
	Sep 14 22:13:02 ingress-addon-legacy-438000 kubelet[2626]: E0914 22:13:02.830819    2626 pod_workers.go:191] Error syncing pod 3107e77f-e9cf-4744-9e51-03b4b0ee3396 ("kube-ingress-dns-minikube_kube-system(3107e77f-e9cf-4744-9e51-03b4b0ee3396)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with CrashLoopBackOff: "back-off 20s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(3107e77f-e9cf-4744-9e51-03b4b0ee3396)"
	Sep 14 22:13:07 ingress-addon-legacy-438000 kubelet[2626]: I0914 22:13:07.584000    2626 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-5s7wc" (UniqueName: "kubernetes.io/secret/3107e77f-e9cf-4744-9e51-03b4b0ee3396-minikube-ingress-dns-token-5s7wc") pod "3107e77f-e9cf-4744-9e51-03b4b0ee3396" (UID: "3107e77f-e9cf-4744-9e51-03b4b0ee3396")
	Sep 14 22:13:07 ingress-addon-legacy-438000 kubelet[2626]: I0914 22:13:07.590673    2626 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3107e77f-e9cf-4744-9e51-03b4b0ee3396-minikube-ingress-dns-token-5s7wc" (OuterVolumeSpecName: "minikube-ingress-dns-token-5s7wc") pod "3107e77f-e9cf-4744-9e51-03b4b0ee3396" (UID: "3107e77f-e9cf-4744-9e51-03b4b0ee3396"). InnerVolumeSpecName "minikube-ingress-dns-token-5s7wc". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 14 22:13:07 ingress-addon-legacy-438000 kubelet[2626]: I0914 22:13:07.685438    2626 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-5s7wc" (UniqueName: "kubernetes.io/secret/3107e77f-e9cf-4744-9e51-03b4b0ee3396-minikube-ingress-dns-token-5s7wc") on node "ingress-addon-legacy-438000" DevicePath ""
	Sep 14 22:13:09 ingress-addon-legacy-438000 kubelet[2626]: I0914 22:13:09.613034    2626 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: aa03a1a52c3c622edef0a0b987aacfab7414e5d3f892e11b54de78ee0dc14a05
	Sep 14 22:13:09 ingress-addon-legacy-438000 kubelet[2626]: I0914 22:13:09.830599    2626 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 97b84a7093f719428af3e8cef88cff34c705f0550b34bc64dbca3a5aed4adf6e
	Sep 14 22:13:09 ingress-addon-legacy-438000 kubelet[2626]: W0914 22:13:09.958993    2626 container.go:412] Failed to create summary reader for "/kubepods/besteffort/podc4b62903-4598-44a6-881e-1c13b43c8ce4/96e67aad4cb4b27d04589ec208c93383ec00fc07e7dde1bf0c589b8e46dd8fb6": none of the resources are being tracked.
	Sep 14 22:13:10 ingress-addon-legacy-438000 kubelet[2626]: W0914 22:13:10.629546    2626 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-lxml5 through plugin: invalid network status for
	Sep 14 22:13:10 ingress-addon-legacy-438000 kubelet[2626]: I0914 22:13:10.635928    2626 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 97b84a7093f719428af3e8cef88cff34c705f0550b34bc64dbca3a5aed4adf6e
	Sep 14 22:13:10 ingress-addon-legacy-438000 kubelet[2626]: I0914 22:13:10.636770    2626 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 96e67aad4cb4b27d04589ec208c93383ec00fc07e7dde1bf0c589b8e46dd8fb6
	Sep 14 22:13:10 ingress-addon-legacy-438000 kubelet[2626]: E0914 22:13:10.637078    2626 pod_workers.go:191] Error syncing pod c4b62903-4598-44a6-881e-1c13b43c8ce4 ("hello-world-app-5f5d8b66bb-lxml5_default(c4b62903-4598-44a6-881e-1c13b43c8ce4)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-lxml5_default(c4b62903-4598-44a6-881e-1c13b43c8ce4)"
	Sep 14 22:13:11 ingress-addon-legacy-438000 kubelet[2626]: W0914 22:13:11.665873    2626 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-lxml5 through plugin: invalid network status for
	Sep 14 22:13:18 ingress-addon-legacy-438000 kubelet[2626]: E0914 22:13:18.338657    2626 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-vmrhb.1784e39733363feb", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-vmrhb", UID:"1f472a96-78f7-4016-910f-aef61e0d0bea", APIVersion:"v1", ResourceVersion:"421", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-438000"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc138ff3f9414d3eb, ext:85948627905, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc138ff3f9414d3eb, ext:85948627905, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-vmrhb.1784e39733363feb" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Sep 14 22:13:18 ingress-addon-legacy-438000 kubelet[2626]: E0914 22:13:18.349094    2626 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-vmrhb.1784e39733363feb", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-vmrhb", UID:"1f472a96-78f7-4016-910f-aef61e0d0bea", APIVersion:"v1", ResourceVersion:"421", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-438000"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc138ff3f9414d3eb, ext:85948627905, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc138ff3f947b1cf6, ext:85955331276, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-vmrhb.1784e39733363feb" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Sep 14 22:13:20 ingress-addon-legacy-438000 kubelet[2626]: I0914 22:13:20.540999    2626 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/1f472a96-78f7-4016-910f-aef61e0d0bea-webhook-cert") pod "1f472a96-78f7-4016-910f-aef61e0d0bea" (UID: "1f472a96-78f7-4016-910f-aef61e0d0bea")
	Sep 14 22:13:20 ingress-addon-legacy-438000 kubelet[2626]: I0914 22:13:20.541021    2626 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-5pvrg" (UniqueName: "kubernetes.io/secret/1f472a96-78f7-4016-910f-aef61e0d0bea-ingress-nginx-token-5pvrg") pod "1f472a96-78f7-4016-910f-aef61e0d0bea" (UID: "1f472a96-78f7-4016-910f-aef61e0d0bea")
	Sep 14 22:13:20 ingress-addon-legacy-438000 kubelet[2626]: I0914 22:13:20.545928    2626 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f472a96-78f7-4016-910f-aef61e0d0bea-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "1f472a96-78f7-4016-910f-aef61e0d0bea" (UID: "1f472a96-78f7-4016-910f-aef61e0d0bea"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 14 22:13:20 ingress-addon-legacy-438000 kubelet[2626]: I0914 22:13:20.546035    2626 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f472a96-78f7-4016-910f-aef61e0d0bea-ingress-nginx-token-5pvrg" (OuterVolumeSpecName: "ingress-nginx-token-5pvrg") pod "1f472a96-78f7-4016-910f-aef61e0d0bea" (UID: "1f472a96-78f7-4016-910f-aef61e0d0bea"). InnerVolumeSpecName "ingress-nginx-token-5pvrg". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 14 22:13:20 ingress-addon-legacy-438000 kubelet[2626]: I0914 22:13:20.641184    2626 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/1f472a96-78f7-4016-910f-aef61e0d0bea-webhook-cert") on node "ingress-addon-legacy-438000" DevicePath ""
	Sep 14 22:13:20 ingress-addon-legacy-438000 kubelet[2626]: I0914 22:13:20.641212    2626 reconciler.go:319] Volume detached for volume "ingress-nginx-token-5pvrg" (UniqueName: "kubernetes.io/secret/1f472a96-78f7-4016-910f-aef61e0d0bea-ingress-nginx-token-5pvrg") on node "ingress-addon-legacy-438000" DevicePath ""
	Sep 14 22:13:20 ingress-addon-legacy-438000 kubelet[2626]: W0914 22:13:20.790325    2626 pod_container_deletor.go:77] Container "d38ef5cb17225040b8558f49fef3ac9ced685ca3cb8548fe2b577db3d789fefd" not found in pod's containers
	Sep 14 22:13:20 ingress-addon-legacy-438000 kubelet[2626]: W0914 22:13:20.837750    2626 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/1f472a96-78f7-4016-910f-aef61e0d0bea/volumes" does not exist
	Sep 14 22:13:22 ingress-addon-legacy-438000 kubelet[2626]: I0914 22:13:22.829274    2626 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 96e67aad4cb4b27d04589ec208c93383ec00fc07e7dde1bf0c589b8e46dd8fb6
	Sep 14 22:13:22 ingress-addon-legacy-438000 kubelet[2626]: E0914 22:13:22.829940    2626 pod_workers.go:191] Error syncing pod c4b62903-4598-44a6-881e-1c13b43c8ce4 ("hello-world-app-5f5d8b66bb-lxml5_default(c4b62903-4598-44a6-881e-1c13b43c8ce4)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-lxml5_default(c4b62903-4598-44a6-881e-1c13b43c8ce4)"
	
	* 
	* ==> storage-provisioner [c5cd91320c70] <==
	* I0914 22:12:10.380187       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0914 22:12:10.384831       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0914 22:12:10.384867       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0914 22:12:10.387533       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0914 22:12:10.387795       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-438000_67723bd7-6064-4f01-8384-9ef4d2e95813!
	I0914 22:12:10.387836       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"11da6fca-4a60-413d-8554-b321413d61d0", APIVersion:"v1", ResourceVersion:"444", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-438000_67723bd7-6064-4f01-8384-9ef4d2e95813 became leader
	I0914 22:12:10.488062       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-438000_67723bd7-6064-4f01-8384-9ef4d2e95813!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-438000 -n ingress-addon-legacy-438000
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-438000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (56.32s)
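Editor's note: the post-mortem above ends with two checks, the API server status and a listing of pods that are not Running. A minimal sketch of rerunning those checks by hand against the same profile (adapted from the helpers_test.go steps above; 'minikube kubectl --' is used here, as the startup output suggests, to avoid the kubectl 1.27.2 vs cluster 1.18.20 skew):

	out/minikube-darwin-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-438000
	out/minikube-darwin-arm64 kubectl -p ingress-addon-legacy-438000 -- get po -A --field-selector=status.phase!=Running -o=jsonpath='{.items[*].metadata.name}'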

                                                
                                    
TestMountStart/serial/StartWithMountFirst (10.59s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-032000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-032000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.514955125s)

                                                
                                                
-- stdout --
	* [mount-start-1-032000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17243
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-032000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-032000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-032000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-032000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-032000 -n mount-start-1-032000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-032000 -n mount-start-1-032000: exit status 7 (69.4915ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-032000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.59s)
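Editor's note: both VM creation attempts above fail at the same step, a refused connection to "/var/run/socket_vmnet", after which minikube deletes the machine, retries once, and exits with GUEST_PROVISION. A minimal sketch of checking the socket_vmnet side on the build host before rerunning (the launchd label is an assumption; socket_vmnet is commonly installed as io.github.lima-vm.socket_vmnet):

	ls -l /var/run/socket_vmnet                   # does the socket exist?
	pgrep -fl socket_vmnet                        # is a socket_vmnet process running?
	sudo launchctl list | grep -i socket_vmnet    # if installed as a launchd service (label assumed)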

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (10.4s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-463000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:85: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-463000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (10.322877541s)

                                                
                                                
-- stdout --
	* [multinode-463000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17243
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-463000 in cluster multinode-463000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-463000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 15:15:42.014395    3689 out.go:296] Setting OutFile to fd 1 ...
	I0914 15:15:42.014542    3689 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:15:42.014545    3689 out.go:309] Setting ErrFile to fd 2...
	I0914 15:15:42.014548    3689 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:15:42.014695    3689 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17243-1006/.minikube/bin
	I0914 15:15:42.015713    3689 out.go:303] Setting JSON to false
	I0914 15:15:42.030707    3689 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2716,"bootTime":1694727026,"procs":408,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 15:15:42.030787    3689 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0914 15:15:42.035793    3689 out.go:177] * [multinode-463000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0914 15:15:42.042752    3689 out.go:177]   - MINIKUBE_LOCATION=17243
	I0914 15:15:42.042819    3689 notify.go:220] Checking for updates...
	I0914 15:15:42.046774    3689 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	I0914 15:15:42.049701    3689 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 15:15:42.052768    3689 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 15:15:42.055763    3689 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	I0914 15:15:42.058701    3689 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 15:15:42.061921    3689 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 15:15:42.065757    3689 out.go:177] * Using the qemu2 driver based on user configuration
	I0914 15:15:42.072721    3689 start.go:298] selected driver: qemu2
	I0914 15:15:42.072726    3689 start.go:902] validating driver "qemu2" against <nil>
	I0914 15:15:42.072732    3689 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 15:15:42.074678    3689 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0914 15:15:42.077710    3689 out.go:177] * Automatically selected the socket_vmnet network
	I0914 15:15:42.079132    3689 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 15:15:42.079162    3689 cni.go:84] Creating CNI manager for ""
	I0914 15:15:42.079169    3689 cni.go:136] 0 nodes found, recommending kindnet
	I0914 15:15:42.079172    3689 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0914 15:15:42.079178    3689 start_flags.go:321] config:
	{Name:multinode-463000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-463000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:doc
ker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAge
ntPID:0 AutoPauseInterval:1m0s}
	I0914 15:15:42.083248    3689 iso.go:125] acquiring lock: {Name:mkd5b68b252c18146a825ff5365883948b3c6983 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 15:15:42.090746    3689 out.go:177] * Starting control plane node multinode-463000 in cluster multinode-463000
	I0914 15:15:42.094708    3689 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0914 15:15:42.094727    3689 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0914 15:15:42.094736    3689 cache.go:57] Caching tarball of preloaded images
	I0914 15:15:42.094815    3689 preload.go:174] Found /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 15:15:42.094821    3689 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0914 15:15:42.095077    3689 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/multinode-463000/config.json ...
	I0914 15:15:42.095090    3689 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/multinode-463000/config.json: {Name:mk56049112398004899eb67ace7f77e40a6bd167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 15:15:42.095307    3689 start.go:365] acquiring machines lock for multinode-463000: {Name:mk831f2bd5a9a0c932c8a59972319d68edb53d07 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 15:15:42.095339    3689 start.go:369] acquired machines lock for "multinode-463000" in 25.75µs
	I0914 15:15:42.095354    3689 start.go:93] Provisioning new machine with config: &{Name:multinode-463000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-463000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSiz
e:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 15:15:42.095392    3689 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 15:15:42.103655    3689 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0914 15:15:42.119682    3689 start.go:159] libmachine.API.Create for "multinode-463000" (driver="qemu2")
	I0914 15:15:42.119710    3689 client.go:168] LocalClient.Create starting
	I0914 15:15:42.119771    3689 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem
	I0914 15:15:42.119795    3689 main.go:141] libmachine: Decoding PEM data...
	I0914 15:15:42.119812    3689 main.go:141] libmachine: Parsing certificate...
	I0914 15:15:42.119853    3689 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/cert.pem
	I0914 15:15:42.119872    3689 main.go:141] libmachine: Decoding PEM data...
	I0914 15:15:42.119887    3689 main.go:141] libmachine: Parsing certificate...
	I0914 15:15:42.120213    3689 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17243-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1694625400-17243-arm64.iso...
	I0914 15:15:42.359393    3689 main.go:141] libmachine: Creating SSH key...
	I0914 15:15:42.458947    3689 main.go:141] libmachine: Creating Disk image...
	I0914 15:15:42.458957    3689 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 15:15:42.459080    3689 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/multinode-463000/disk.qcow2.raw /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/multinode-463000/disk.qcow2
	I0914 15:15:42.467579    3689 main.go:141] libmachine: STDOUT: 
	I0914 15:15:42.467594    3689 main.go:141] libmachine: STDERR: 
	I0914 15:15:42.467643    3689 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/multinode-463000/disk.qcow2 +20000M
	I0914 15:15:42.474665    3689 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 15:15:42.474677    3689 main.go:141] libmachine: STDERR: 
	I0914 15:15:42.474692    3689 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/multinode-463000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/multinode-463000/disk.qcow2
	I0914 15:15:42.474697    3689 main.go:141] libmachine: Starting QEMU VM...
	I0914 15:15:42.474737    3689 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/multinode-463000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/multinode-463000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/multinode-463000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:06:3e:60:fa:57 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/multinode-463000/disk.qcow2
	I0914 15:15:42.476201    3689 main.go:141] libmachine: STDOUT: 
	I0914 15:15:42.476213    3689 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 15:15:42.476228    3689 client.go:171] LocalClient.Create took 356.522792ms
	I0914 15:15:44.478357    3689 start.go:128] duration metric: createHost completed in 2.3829985s
	I0914 15:15:44.478416    3689 start.go:83] releasing machines lock for "multinode-463000", held for 2.38311875s
	W0914 15:15:44.478474    3689 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:15:44.489694    3689 out.go:177] * Deleting "multinode-463000" in qemu2 ...
	W0914 15:15:44.511397    3689 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:15:44.511437    3689 start.go:703] Will try again in 5 seconds ...
	I0914 15:15:49.513536    3689 start.go:365] acquiring machines lock for multinode-463000: {Name:mk831f2bd5a9a0c932c8a59972319d68edb53d07 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 15:15:49.513990    3689 start.go:369] acquired machines lock for "multinode-463000" in 366.709µs
	I0914 15:15:49.514115    3689 start.go:93] Provisioning new machine with config: &{Name:multinode-463000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-463000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSiz
e:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 15:15:49.514367    3689 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 15:15:49.521972    3689 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0914 15:15:49.567026    3689 start.go:159] libmachine.API.Create for "multinode-463000" (driver="qemu2")
	I0914 15:15:49.567071    3689 client.go:168] LocalClient.Create starting
	I0914 15:15:49.567193    3689 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem
	I0914 15:15:49.567246    3689 main.go:141] libmachine: Decoding PEM data...
	I0914 15:15:49.567268    3689 main.go:141] libmachine: Parsing certificate...
	I0914 15:15:49.567350    3689 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/cert.pem
	I0914 15:15:49.567390    3689 main.go:141] libmachine: Decoding PEM data...
	I0914 15:15:49.567409    3689 main.go:141] libmachine: Parsing certificate...
	I0914 15:15:49.567911    3689 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17243-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1694625400-17243-arm64.iso...
	I0914 15:15:49.790894    3689 main.go:141] libmachine: Creating SSH key...
	I0914 15:15:50.250309    3689 main.go:141] libmachine: Creating Disk image...
	I0914 15:15:50.250328    3689 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 15:15:50.250526    3689 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/multinode-463000/disk.qcow2.raw /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/multinode-463000/disk.qcow2
	I0914 15:15:50.259788    3689 main.go:141] libmachine: STDOUT: 
	I0914 15:15:50.259820    3689 main.go:141] libmachine: STDERR: 
	I0914 15:15:50.259879    3689 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/multinode-463000/disk.qcow2 +20000M
	I0914 15:15:50.267057    3689 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 15:15:50.267077    3689 main.go:141] libmachine: STDERR: 
	I0914 15:15:50.267091    3689 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/multinode-463000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/multinode-463000/disk.qcow2
	I0914 15:15:50.267100    3689 main.go:141] libmachine: Starting QEMU VM...
	I0914 15:15:50.267145    3689 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/multinode-463000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/multinode-463000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/multinode-463000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:20:28:b6:76:7d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/multinode-463000/disk.qcow2
	I0914 15:15:50.268750    3689 main.go:141] libmachine: STDOUT: 
	I0914 15:15:50.268775    3689 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 15:15:50.268789    3689 client.go:171] LocalClient.Create took 701.726041ms
	I0914 15:15:52.270917    3689 start.go:128] duration metric: createHost completed in 2.756560333s
	I0914 15:15:52.270981    3689 start.go:83] releasing machines lock for "multinode-463000", held for 2.757029084s
	W0914 15:15:52.271239    3689 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-463000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-463000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:15:52.279104    3689 out.go:177] 
	W0914 15:15:52.283101    3689 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 15:15:52.283126    3689 out.go:239] * 
	* 
	W0914 15:15:52.284406    3689 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 15:15:52.294985    3689 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:87: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-463000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-463000 -n multinode-463000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-463000 -n multinode-463000: exit status 7 (71.115834ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-463000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (10.40s)

TestMultiNode/serial/DeployApp2Nodes (115.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-463000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:481: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-463000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (127.5245ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-463000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:483: failed to create busybox deployment to multinode cluster
multinode_test.go:486: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-463000 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-463000 -- rollout status deployment/busybox: exit status 1 (55.169916ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-463000"

                                                
                                                
** /stderr **
multinode_test.go:488: failed to deploy busybox to multinode cluster
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-463000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-463000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (56.036625ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-463000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-463000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-463000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.349583ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-463000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-463000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-463000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.606125ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-463000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-463000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-463000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.292083ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-463000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-463000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-463000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.196208ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-463000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-463000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-463000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.726709ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-463000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-463000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-463000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.753583ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-463000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-463000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-463000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.661041ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-463000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
E0914 15:16:32.779942    1425 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/functional-398000/client.crt: no such file or directory
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-463000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-463000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.307083ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-463000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
E0914 15:17:00.481900    1425 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/functional-398000/client.crt: no such file or directory
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-463000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-463000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.527375ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-463000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
E0914 15:17:29.385799    1425 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/ingress-addon-legacy-438000/client.crt: no such file or directory
E0914 15:17:29.391425    1425 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/ingress-addon-legacy-438000/client.crt: no such file or directory
E0914 15:17:29.403553    1425 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/ingress-addon-legacy-438000/client.crt: no such file or directory
E0914 15:17:29.425680    1425 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/ingress-addon-legacy-438000/client.crt: no such file or directory
E0914 15:17:29.467788    1425 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/ingress-addon-legacy-438000/client.crt: no such file or directory
E0914 15:17:29.550047    1425 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/ingress-addon-legacy-438000/client.crt: no such file or directory
E0914 15:17:29.710516    1425 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/ingress-addon-legacy-438000/client.crt: no such file or directory
E0914 15:17:30.032730    1425 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/ingress-addon-legacy-438000/client.crt: no such file or directory
E0914 15:17:30.675263    1425 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/ingress-addon-legacy-438000/client.crt: no such file or directory
E0914 15:17:31.957627    1425 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/ingress-addon-legacy-438000/client.crt: no such file or directory
E0914 15:17:34.518103    1425 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/ingress-addon-legacy-438000/client.crt: no such file or directory
E0914 15:17:39.638538    1425 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/ingress-addon-legacy-438000/client.crt: no such file or directory
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-463000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-463000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.721708ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-463000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:512: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:516: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-463000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:516: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-463000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (54.234625ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-463000"

                                                
                                                
** /stderr **
multinode_test.go:518: failed get Pod names
multinode_test.go:524: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-463000 -- exec  -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-463000 -- exec  -- nslookup kubernetes.io: exit status 1 (54.353084ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-463000"

                                                
                                                
** /stderr **
multinode_test.go:526: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-463000 -- exec  -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-463000 -- exec  -- nslookup kubernetes.default: exit status 1 (54.585833ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-463000"

                                                
                                                
** /stderr **
multinode_test.go:536: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:542: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-463000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-463000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (54.504041ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-463000"

                                                
                                                
** /stderr **
multinode_test.go:544: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-463000 -n multinode-463000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-463000 -n multinode-463000: exit status 7 (29.271458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-463000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (115.24s)

TestMultiNode/serial/PingHostFrom2Pods (0.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-463000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-463000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.067792ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-463000"

                                                
                                                
** /stderr **
multinode_test.go:554: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-463000 -n multinode-463000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-463000 -n multinode-463000: exit status 7 (29.526583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-463000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.08s)

TestMultiNode/serial/AddNode (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-463000 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-463000 -v 3 --alsologtostderr: exit status 89 (40.168333ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-463000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 15:17:47.728294    3798 out.go:296] Setting OutFile to fd 1 ...
	I0914 15:17:47.728507    3798 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:17:47.728510    3798 out.go:309] Setting ErrFile to fd 2...
	I0914 15:17:47.728512    3798 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:17:47.728637    3798 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17243-1006/.minikube/bin
	I0914 15:17:47.728886    3798 mustload.go:65] Loading cluster: multinode-463000
	I0914 15:17:47.729077    3798 config.go:182] Loaded profile config "multinode-463000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 15:17:47.734366    3798 out.go:177] * The control plane node must be running for this command
	I0914 15:17:47.737315    3798 out.go:177]   To start a cluster, run: "minikube start -p multinode-463000"

                                                
                                                
** /stderr **
multinode_test.go:112: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-463000 -v 3 --alsologtostderr" : exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-463000 -n multinode-463000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-463000 -n multinode-463000: exit status 7 (28.919917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-463000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

TestMultiNode/serial/ProfileList (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:155: expected profile "multinode-463000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-463000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-463000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"VMDriver\":\"\",\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidd
en\":false,\"KVMNUMACount\":1,\"APIServerPort\":0,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.1\",\"ClusterName\":\"multinode-463000\",\"Namespace\":\"default\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\",\"NodeIP\":\"\",\"NodePort\":8443,\"NodeName\":\"\"},\"Nodes\":[{\"Name\":\"\
",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPat
h\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-463000 -n multinode-463000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-463000 -n multinode-463000: exit status 7 (29.430208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-463000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.10s)

TestMultiNode/serial/CopyFile (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-463000 status --output json --alsologtostderr
multinode_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-463000 status --output json --alsologtostderr: exit status 7 (29.040333ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-463000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 15:17:47.898619    3808 out.go:296] Setting OutFile to fd 1 ...
	I0914 15:17:47.898775    3808 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:17:47.898778    3808 out.go:309] Setting ErrFile to fd 2...
	I0914 15:17:47.898781    3808 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:17:47.898928    3808 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17243-1006/.minikube/bin
	I0914 15:17:47.899054    3808 out.go:303] Setting JSON to true
	I0914 15:17:47.899066    3808 mustload.go:65] Loading cluster: multinode-463000
	I0914 15:17:47.899137    3808 notify.go:220] Checking for updates...
	I0914 15:17:47.899275    3808 config.go:182] Loaded profile config "multinode-463000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 15:17:47.899279    3808 status.go:255] checking status of multinode-463000 ...
	I0914 15:17:47.899513    3808 status.go:330] multinode-463000 host status = "Stopped" (err=<nil>)
	I0914 15:17:47.899516    3808 status.go:343] host is not running, skipping remaining checks
	I0914 15:17:47.899518    3808 status.go:257] multinode-463000 status: &{Name:multinode-463000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:180: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-463000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-463000 -n multinode-463000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-463000 -n multinode-463000: exit status 7 (28.744125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-463000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)

TestMultiNode/serial/StopNode (0.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-463000 node stop m03
multinode_test.go:210: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-463000 node stop m03: exit status 85 (45.939792ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:212: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-463000 node stop m03": exit status 85
multinode_test.go:216: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-463000 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-463000 status: exit status 7 (29.342791ms)

                                                
                                                
-- stdout --
	multinode-463000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-463000 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-463000 status --alsologtostderr: exit status 7 (29.241708ms)

                                                
                                                
-- stdout --
	multinode-463000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 15:17:48.032690    3816 out.go:296] Setting OutFile to fd 1 ...
	I0914 15:17:48.032863    3816 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:17:48.032866    3816 out.go:309] Setting ErrFile to fd 2...
	I0914 15:17:48.032869    3816 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:17:48.033008    3816 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17243-1006/.minikube/bin
	I0914 15:17:48.033140    3816 out.go:303] Setting JSON to false
	I0914 15:17:48.033151    3816 mustload.go:65] Loading cluster: multinode-463000
	I0914 15:17:48.033229    3816 notify.go:220] Checking for updates...
	I0914 15:17:48.033359    3816 config.go:182] Loaded profile config "multinode-463000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 15:17:48.033363    3816 status.go:255] checking status of multinode-463000 ...
	I0914 15:17:48.033579    3816 status.go:330] multinode-463000 host status = "Stopped" (err=<nil>)
	I0914 15:17:48.033583    3816 status.go:343] host is not running, skipping remaining checks
	I0914 15:17:48.033585    3816 status.go:257] multinode-463000 status: &{Name:multinode-463000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:229: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-463000 status --alsologtostderr": multinode-463000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-463000 -n multinode-463000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-463000 -n multinode-463000: exit status 7 (29.302208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-463000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.13s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (0.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-463000 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-463000 node start m03 --alsologtostderr: exit status 85 (46.529333ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 15:17:48.091165    3820 out.go:296] Setting OutFile to fd 1 ...
	I0914 15:17:48.091394    3820 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:17:48.091397    3820 out.go:309] Setting ErrFile to fd 2...
	I0914 15:17:48.091400    3820 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:17:48.091537    3820 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17243-1006/.minikube/bin
	I0914 15:17:48.091789    3820 mustload.go:65] Loading cluster: multinode-463000
	I0914 15:17:48.091971    3820 config.go:182] Loaded profile config "multinode-463000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 15:17:48.096836    3820 out.go:177] 
	W0914 15:17:48.100657    3820 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0914 15:17:48.100662    3820 out.go:239] * 
	* 
	W0914 15:17:48.102343    3820 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 15:17:48.105687    3820 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:256: I0914 15:17:48.091165    3820 out.go:296] Setting OutFile to fd 1 ...
I0914 15:17:48.091394    3820 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0914 15:17:48.091397    3820 out.go:309] Setting ErrFile to fd 2...
I0914 15:17:48.091400    3820 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0914 15:17:48.091537    3820 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17243-1006/.minikube/bin
I0914 15:17:48.091789    3820 mustload.go:65] Loading cluster: multinode-463000
I0914 15:17:48.091971    3820 config.go:182] Loaded profile config "multinode-463000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0914 15:17:48.096836    3820 out.go:177] 
W0914 15:17:48.100657    3820 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0914 15:17:48.100662    3820 out.go:239] * 
* 
W0914 15:17:48.102343    3820 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0914 15:17:48.105687    3820 out.go:177] 
multinode_test.go:257: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-463000 node start m03 --alsologtostderr": exit status 85
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-463000 status
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-463000 status: exit status 7 (30.426375ms)

                                                
                                                
-- stdout --
	multinode-463000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:263: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-463000 status" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-463000 -n multinode-463000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-463000 -n multinode-463000: exit status 7 (30.160041ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-463000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (0.11s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (5.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-463000
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-463000
multinode_test.go:295: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-463000 --wait=true -v=8 --alsologtostderr
E0914 15:17:49.880887    1425 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/ingress-addon-legacy-438000/client.crt: no such file or directory
multinode_test.go:295: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-463000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.175852959s)

                                                
                                                
-- stdout --
	* [multinode-463000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17243
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node multinode-463000 in cluster multinode-463000
	* Restarting existing qemu2 VM for "multinode-463000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-463000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 15:17:48.299729    3830 out.go:296] Setting OutFile to fd 1 ...
	I0914 15:17:48.299938    3830 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:17:48.299942    3830 out.go:309] Setting ErrFile to fd 2...
	I0914 15:17:48.299944    3830 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:17:48.300098    3830 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17243-1006/.minikube/bin
	I0914 15:17:48.301209    3830 out.go:303] Setting JSON to false
	I0914 15:17:48.316717    3830 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2842,"bootTime":1694727026,"procs":405,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 15:17:48.316790    3830 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0914 15:17:48.320887    3830 out.go:177] * [multinode-463000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0914 15:17:48.327775    3830 out.go:177]   - MINIKUBE_LOCATION=17243
	I0914 15:17:48.327837    3830 notify.go:220] Checking for updates...
	I0914 15:17:48.331774    3830 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	I0914 15:17:48.334717    3830 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 15:17:48.338724    3830 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 15:17:48.341763    3830 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	I0914 15:17:48.344677    3830 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 15:17:48.348003    3830 config.go:182] Loaded profile config "multinode-463000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 15:17:48.348070    3830 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 15:17:48.352706    3830 out.go:177] * Using the qemu2 driver based on existing profile
	I0914 15:17:48.359734    3830 start.go:298] selected driver: qemu2
	I0914 15:17:48.359738    3830 start.go:902] validating driver "qemu2" against &{Name:multinode-463000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.28.1 ClusterName:multinode-463000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2
62144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 15:17:48.359797    3830 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 15:17:48.361682    3830 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 15:17:48.361708    3830 cni.go:84] Creating CNI manager for ""
	I0914 15:17:48.361713    3830 cni.go:136] 1 nodes found, recommending kindnet
	I0914 15:17:48.361717    3830 start_flags.go:321] config:
	{Name:multinode-463000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-463000 Namespace:default API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 15:17:48.365844    3830 iso.go:125] acquiring lock: {Name:mkd5b68b252c18146a825ff5365883948b3c6983 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 15:17:48.372734    3830 out.go:177] * Starting control plane node multinode-463000 in cluster multinode-463000
	I0914 15:17:48.376769    3830 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0914 15:17:48.376788    3830 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0914 15:17:48.376805    3830 cache.go:57] Caching tarball of preloaded images
	I0914 15:17:48.376890    3830 preload.go:174] Found /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 15:17:48.376896    3830 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0914 15:17:48.376968    3830 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/multinode-463000/config.json ...
	I0914 15:17:48.377329    3830 start.go:365] acquiring machines lock for multinode-463000: {Name:mk831f2bd5a9a0c932c8a59972319d68edb53d07 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 15:17:48.377359    3830 start.go:369] acquired machines lock for "multinode-463000" in 24.875µs
	I0914 15:17:48.377373    3830 start.go:96] Skipping create...Using existing machine configuration
	I0914 15:17:48.377378    3830 fix.go:54] fixHost starting: 
	I0914 15:17:48.377497    3830 fix.go:102] recreateIfNeeded on multinode-463000: state=Stopped err=<nil>
	W0914 15:17:48.377505    3830 fix.go:128] unexpected machine state, will restart: <nil>
	I0914 15:17:48.380698    3830 out.go:177] * Restarting existing qemu2 VM for "multinode-463000" ...
	I0914 15:17:48.388800    3830 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/multinode-463000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/multinode-463000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/multinode-463000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:20:28:b6:76:7d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/multinode-463000/disk.qcow2
	I0914 15:17:48.390713    3830 main.go:141] libmachine: STDOUT: 
	I0914 15:17:48.390737    3830 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 15:17:48.390764    3830 fix.go:56] fixHost completed within 13.385583ms
	I0914 15:17:48.390770    3830 start.go:83] releasing machines lock for "multinode-463000", held for 13.406334ms
	W0914 15:17:48.390777    3830 start.go:688] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 15:17:48.390816    3830 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:17:48.390821    3830 start.go:703] Will try again in 5 seconds ...
	I0914 15:17:53.392153    3830 start.go:365] acquiring machines lock for multinode-463000: {Name:mk831f2bd5a9a0c932c8a59972319d68edb53d07 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 15:17:53.392529    3830 start.go:369] acquired machines lock for "multinode-463000" in 273.042µs
	I0914 15:17:53.392660    3830 start.go:96] Skipping create...Using existing machine configuration
	I0914 15:17:53.392683    3830 fix.go:54] fixHost starting: 
	I0914 15:17:53.393431    3830 fix.go:102] recreateIfNeeded on multinode-463000: state=Stopped err=<nil>
	W0914 15:17:53.393465    3830 fix.go:128] unexpected machine state, will restart: <nil>
	I0914 15:17:53.398978    3830 out.go:177] * Restarting existing qemu2 VM for "multinode-463000" ...
	I0914 15:17:53.403098    3830 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/multinode-463000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/multinode-463000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/multinode-463000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:20:28:b6:76:7d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/multinode-463000/disk.qcow2
	I0914 15:17:53.411768    3830 main.go:141] libmachine: STDOUT: 
	I0914 15:17:53.411820    3830 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 15:17:53.411896    3830 fix.go:56] fixHost completed within 19.217875ms
	I0914 15:17:53.411915    3830 start.go:83] releasing machines lock for "multinode-463000", held for 19.363667ms
	W0914 15:17:53.412052    3830 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-463000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-463000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:17:53.419937    3830 out.go:177] 
	W0914 15:17:53.423784    3830 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 15:17:53.423809    3830 out.go:239] * 
	* 
	W0914 15:17:53.426416    3830 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 15:17:53.433874    3830 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:297: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-463000" : exit status 80
multinode_test.go:300: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-463000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-463000 -n multinode-463000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-463000 -n multinode-463000: exit status 7 (33.008042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-463000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (5.38s)
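Every qemu2 restart in the sections above and below fails the same way: libmachine shells out to /opt/socket_vmnet/bin/socket_vmnet_client and gets `Failed to connect to "/var/run/socket_vmnet": Connection refused`, so the VM never comes back up. The snippet below is a minimal diagnostic sketch, not part of the test suite or of this log, that dials the same unix socket (path taken from the libmachine command lines above) to confirm whether the socket_vmnet daemon is accepting connections on the host.

package main

// Diagnostic sketch only: probes the unix socket that the qemu2 driver's
// socket_vmnet_client tries to connect to. A "connection refused" here
// matches the driver failure seen throughout this report.

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const socketPath = "/var/run/socket_vmnet" // path from the libmachine command lines above

	conn, err := net.DialTimeout("unix", socketPath, 2*time.Second)
	if err != nil {
		fmt.Printf("socket_vmnet not reachable at %s: %v\n", socketPath, err)
		return
	}
	defer conn.Close()
	fmt.Printf("socket_vmnet is accepting connections at %s\n", socketPath)
}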

                                                
                                    
TestMultiNode/serial/DeleteNode (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-463000 node delete m03
multinode_test.go:394: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-463000 node delete m03: exit status 89 (38.348041ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-463000"

                                                
                                                
-- /stdout --
multinode_test.go:396: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-463000 node delete m03": exit status 89
multinode_test.go:400: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-463000 status --alsologtostderr
multinode_test.go:400: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-463000 status --alsologtostderr: exit status 7 (32.330792ms)

                                                
                                                
-- stdout --
	multinode-463000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 15:17:53.618358    3847 out.go:296] Setting OutFile to fd 1 ...
	I0914 15:17:53.618503    3847 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:17:53.618506    3847 out.go:309] Setting ErrFile to fd 2...
	I0914 15:17:53.618508    3847 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:17:53.618626    3847 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17243-1006/.minikube/bin
	I0914 15:17:53.618756    3847 out.go:303] Setting JSON to false
	I0914 15:17:53.618768    3847 mustload.go:65] Loading cluster: multinode-463000
	I0914 15:17:53.618835    3847 notify.go:220] Checking for updates...
	I0914 15:17:53.618981    3847 config.go:182] Loaded profile config "multinode-463000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 15:17:53.618986    3847 status.go:255] checking status of multinode-463000 ...
	I0914 15:17:53.619190    3847 status.go:330] multinode-463000 host status = "Stopped" (err=<nil>)
	I0914 15:17:53.619193    3847 status.go:343] host is not running, skipping remaining checks
	I0914 15:17:53.619195    3847 status.go:257] multinode-463000 status: &{Name:multinode-463000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:402: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-463000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-463000 -n multinode-463000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-463000 -n multinode-463000: exit status 7 (29.566958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-463000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (0.15s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-463000 stop
multinode_test.go:320: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-463000 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-463000 status: exit status 7 (28.876ms)

                                                
                                                
-- stdout --
	multinode-463000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-463000 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-463000 status --alsologtostderr: exit status 7 (28.913917ms)

                                                
                                                
-- stdout --
	multinode-463000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 15:17:53.764754    3855 out.go:296] Setting OutFile to fd 1 ...
	I0914 15:17:53.764887    3855 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:17:53.764890    3855 out.go:309] Setting ErrFile to fd 2...
	I0914 15:17:53.764892    3855 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:17:53.765019    3855 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17243-1006/.minikube/bin
	I0914 15:17:53.765138    3855 out.go:303] Setting JSON to false
	I0914 15:17:53.765149    3855 mustload.go:65] Loading cluster: multinode-463000
	I0914 15:17:53.765214    3855 notify.go:220] Checking for updates...
	I0914 15:17:53.765362    3855 config.go:182] Loaded profile config "multinode-463000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 15:17:53.765370    3855 status.go:255] checking status of multinode-463000 ...
	I0914 15:17:53.765576    3855 status.go:330] multinode-463000 host status = "Stopped" (err=<nil>)
	I0914 15:17:53.765580    3855 status.go:343] host is not running, skipping remaining checks
	I0914 15:17:53.765582    3855 status.go:257] multinode-463000 status: &{Name:multinode-463000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:333: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-463000 status --alsologtostderr": multinode-463000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
multinode_test.go:337: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-463000 status --alsologtostderr": multinode-463000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-463000 -n multinode-463000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-463000 -n multinode-463000: exit status 7 (31.192917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-463000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (0.15s)
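For reference, the assertions at multinode_test.go:333 and :337 boil down to counting how many nodes report their host and kubelet as Stopped in the status output; with only the control-plane entry present, the two-node expectation is not met. Below is a rough sketch of that kind of check, not the actual test code, and the two-node expectation is assumed for illustration.

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Status output as printed above: only the control-plane node is listed.
	statusOutput := `multinode-463000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
`

	stoppedHosts := strings.Count(statusOutput, "host: Stopped")
	stoppedKubelets := strings.Count(statusOutput, "kubelet: Stopped")

	// StopMultiNode stops a two-node cluster, so both counts should be 2.
	const expectedNodes = 2
	if stoppedHosts != expectedNodes || stoppedKubelets != expectedNodes {
		fmt.Printf("incorrect number of stopped hosts (%d) or kubelets (%d); expected %d of each\n",
			stoppedHosts, stoppedKubelets, expectedNodes)
	}
}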

                                                
                                    
TestMultiNode/serial/RestartMultiNode (5.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-463000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:354: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-463000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.181151333s)

                                                
                                                
-- stdout --
	* [multinode-463000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17243
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node multinode-463000 in cluster multinode-463000
	* Restarting existing qemu2 VM for "multinode-463000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-463000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 15:17:53.824717    3859 out.go:296] Setting OutFile to fd 1 ...
	I0914 15:17:53.824823    3859 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:17:53.824826    3859 out.go:309] Setting ErrFile to fd 2...
	I0914 15:17:53.824829    3859 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:17:53.824958    3859 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17243-1006/.minikube/bin
	I0914 15:17:53.825897    3859 out.go:303] Setting JSON to false
	I0914 15:17:53.840905    3859 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2847,"bootTime":1694727026,"procs":407,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 15:17:53.840994    3859 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0914 15:17:53.845876    3859 out.go:177] * [multinode-463000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0914 15:17:53.852773    3859 out.go:177]   - MINIKUBE_LOCATION=17243
	I0914 15:17:53.856821    3859 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	I0914 15:17:53.852894    3859 notify.go:220] Checking for updates...
	I0914 15:17:53.863741    3859 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 15:17:53.867764    3859 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 15:17:53.870787    3859 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	I0914 15:17:53.873810    3859 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 15:17:53.877131    3859 config.go:182] Loaded profile config "multinode-463000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 15:17:53.877389    3859 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 15:17:53.881757    3859 out.go:177] * Using the qemu2 driver based on existing profile
	I0914 15:17:53.888833    3859 start.go:298] selected driver: qemu2
	I0914 15:17:53.888838    3859 start.go:902] validating driver "qemu2" against &{Name:multinode-463000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.28.1 ClusterName:multinode-463000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2
62144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 15:17:53.888913    3859 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 15:17:53.890882    3859 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 15:17:53.890908    3859 cni.go:84] Creating CNI manager for ""
	I0914 15:17:53.890913    3859 cni.go:136] 1 nodes found, recommending kindnet
	I0914 15:17:53.890919    3859 start_flags.go:321] config:
	{Name:multinode-463000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-463000 Namespace:default API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 15:17:53.895032    3859 iso.go:125] acquiring lock: {Name:mkd5b68b252c18146a825ff5365883948b3c6983 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 15:17:53.901809    3859 out.go:177] * Starting control plane node multinode-463000 in cluster multinode-463000
	I0914 15:17:53.905630    3859 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0914 15:17:53.905651    3859 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0914 15:17:53.905662    3859 cache.go:57] Caching tarball of preloaded images
	I0914 15:17:53.905742    3859 preload.go:174] Found /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 15:17:53.905754    3859 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0914 15:17:53.905839    3859 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/multinode-463000/config.json ...
	I0914 15:17:53.906213    3859 start.go:365] acquiring machines lock for multinode-463000: {Name:mk831f2bd5a9a0c932c8a59972319d68edb53d07 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 15:17:53.906246    3859 start.go:369] acquired machines lock for "multinode-463000" in 26.458µs
	I0914 15:17:53.906256    3859 start.go:96] Skipping create...Using existing machine configuration
	I0914 15:17:53.906261    3859 fix.go:54] fixHost starting: 
	I0914 15:17:53.906385    3859 fix.go:102] recreateIfNeeded on multinode-463000: state=Stopped err=<nil>
	W0914 15:17:53.906394    3859 fix.go:128] unexpected machine state, will restart: <nil>
	I0914 15:17:53.910770    3859 out.go:177] * Restarting existing qemu2 VM for "multinode-463000" ...
	I0914 15:17:53.918862    3859 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/multinode-463000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/multinode-463000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/multinode-463000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:20:28:b6:76:7d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/multinode-463000/disk.qcow2
	I0914 15:17:53.920762    3859 main.go:141] libmachine: STDOUT: 
	I0914 15:17:53.920784    3859 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 15:17:53.920814    3859 fix.go:56] fixHost completed within 14.552125ms
	I0914 15:17:53.920820    3859 start.go:83] releasing machines lock for "multinode-463000", held for 14.569875ms
	W0914 15:17:53.920826    3859 start.go:688] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 15:17:53.920866    3859 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:17:53.920871    3859 start.go:703] Will try again in 5 seconds ...
	I0914 15:17:58.922933    3859 start.go:365] acquiring machines lock for multinode-463000: {Name:mk831f2bd5a9a0c932c8a59972319d68edb53d07 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 15:17:58.923272    3859 start.go:369] acquired machines lock for "multinode-463000" in 268.584µs
	I0914 15:17:58.923452    3859 start.go:96] Skipping create...Using existing machine configuration
	I0914 15:17:58.923471    3859 fix.go:54] fixHost starting: 
	I0914 15:17:58.924145    3859 fix.go:102] recreateIfNeeded on multinode-463000: state=Stopped err=<nil>
	W0914 15:17:58.924174    3859 fix.go:128] unexpected machine state, will restart: <nil>
	I0914 15:17:58.931556    3859 out.go:177] * Restarting existing qemu2 VM for "multinode-463000" ...
	I0914 15:17:58.935819    3859 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/multinode-463000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/multinode-463000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/multinode-463000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:20:28:b6:76:7d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/multinode-463000/disk.qcow2
	I0914 15:17:58.944499    3859 main.go:141] libmachine: STDOUT: 
	I0914 15:17:58.944554    3859 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 15:17:58.944627    3859 fix.go:56] fixHost completed within 21.154833ms
	I0914 15:17:58.944644    3859 start.go:83] releasing machines lock for "multinode-463000", held for 21.349542ms
	W0914 15:17:58.944819    3859 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-463000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-463000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:17:58.951522    3859 out.go:177] 
	W0914 15:17:58.955629    3859 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 15:17:58.955655    3859 out.go:239] * 
	* 
	W0914 15:17:58.958530    3859 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 15:17:58.966486    3859 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:356: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-463000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-463000 -n multinode-463000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-463000 -n multinode-463000: exit status 7 (67.126583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-463000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.25s)
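Note: every start attempt in this report dies at the same point: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet (see the full qemu invocation logged above). A minimal host-side check, sketched with the install paths the log itself uses (not part of the test output):

	# does the socket file the driver expects exist, and is anything listening on it?
	ls -l /var/run/socket_vmnet
	sudo lsof -U 2>/dev/null | grep socket_vmnet
	# run the same client binary the log shows against a no-op command; if the daemon
	# is down this should print the same "Connection refused" seen in the failures above
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true

If nothing is listening, the socket_vmnet daemon is simply not running on the build host, which would explain why the failures are uniform across tests rather than test-specific.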

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (19.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-463000
multinode_test.go:452: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-463000-m01 --driver=qemu2 
E0914 15:18:07.451504    1425 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/client.crt: no such file or directory
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-463000-m01 --driver=qemu2 : exit status 80 (9.719738375s)

                                                
                                                
-- stdout --
	* [multinode-463000-m01] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17243
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-463000-m01 in cluster multinode-463000-m01
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-463000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-463000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-463000-m02 --driver=qemu2 
E0914 15:18:10.361555    1425 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/ingress-addon-legacy-438000/client.crt: no such file or directory
multinode_test.go:460: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-463000-m02 --driver=qemu2 : exit status 80 (10.015092875s)

                                                
                                                
-- stdout --
	* [multinode-463000-m02] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17243
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-463000-m02 in cluster multinode-463000-m02
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-463000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-463000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:462: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-463000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-463000
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-463000: exit status 89 (77.819583ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-463000"

                                                
                                                
-- /stdout --
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-463000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-463000 -n multinode-463000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-463000 -n multinode-463000: exit status 7 (29.993791ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-463000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (19.98s)
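Note: the node add step above exits 89 because the base profile never came up; the post-mortem status --format={{.Host}} confirms it is Stopped (exit 7). A small guard around the same commands shown in this report (profile name taken from the log; purely a sketch, not what multinode_test.go actually does):

	# only attempt node add when the control plane reports Running
	if out/minikube-darwin-arm64 status --format='{{.Host}}' -p multinode-463000 | grep -q Running; then
	    out/minikube-darwin-arm64 node add -p multinode-463000
	else
	    echo "control plane not running; skipping node add"
	fi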

                                                
                                    
TestPreload (9.88s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-786000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-786000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.708694292s)

                                                
                                                
-- stdout --
	* [test-preload-786000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17243
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node test-preload-786000 in cluster test-preload-786000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-786000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 15:18:19.173642    3924 out.go:296] Setting OutFile to fd 1 ...
	I0914 15:18:19.173761    3924 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:18:19.173764    3924 out.go:309] Setting ErrFile to fd 2...
	I0914 15:18:19.173767    3924 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:18:19.173890    3924 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17243-1006/.minikube/bin
	I0914 15:18:19.174915    3924 out.go:303] Setting JSON to false
	I0914 15:18:19.190050    3924 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2873,"bootTime":1694727026,"procs":410,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 15:18:19.190139    3924 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0914 15:18:19.193890    3924 out.go:177] * [test-preload-786000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0914 15:18:19.201771    3924 out.go:177]   - MINIKUBE_LOCATION=17243
	I0914 15:18:19.205910    3924 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	I0914 15:18:19.201886    3924 notify.go:220] Checking for updates...
	I0914 15:18:19.210230    3924 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 15:18:19.212876    3924 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 15:18:19.215910    3924 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	I0914 15:18:19.218946    3924 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 15:18:19.222347    3924 config.go:182] Loaded profile config "multinode-463000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 15:18:19.222390    3924 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 15:18:19.226890    3924 out.go:177] * Using the qemu2 driver based on user configuration
	I0914 15:18:19.233917    3924 start.go:298] selected driver: qemu2
	I0914 15:18:19.233922    3924 start.go:902] validating driver "qemu2" against <nil>
	I0914 15:18:19.233928    3924 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 15:18:19.235889    3924 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0914 15:18:19.238869    3924 out.go:177] * Automatically selected the socket_vmnet network
	I0914 15:18:19.241996    3924 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 15:18:19.242026    3924 cni.go:84] Creating CNI manager for ""
	I0914 15:18:19.242033    3924 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 15:18:19.242038    3924 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0914 15:18:19.242043    3924 start_flags.go:321] config:
	{Name:test-preload-786000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-786000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 15:18:19.246057    3924 iso.go:125] acquiring lock: {Name:mkd5b68b252c18146a825ff5365883948b3c6983 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 15:18:19.252973    3924 out.go:177] * Starting control plane node test-preload-786000 in cluster test-preload-786000
	I0914 15:18:19.255855    3924 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0914 15:18:19.255945    3924 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/test-preload-786000/config.json ...
	I0914 15:18:19.255961    3924 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/test-preload-786000/config.json: {Name:mkca4b24aa254d2615f77dc61992334e901a0939 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 15:18:19.255995    3924 cache.go:107] acquiring lock: {Name:mkd53f39c8984a1a6e842ba1d0d45a9f41a4874f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 15:18:19.256016    3924 cache.go:107] acquiring lock: {Name:mkc25fea704d54c03a29a54cca16f18863ff7e9f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 15:18:19.255997    3924 cache.go:107] acquiring lock: {Name:mk72427bcf69433fa3bf845de2b42ff62f56d4f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 15:18:19.256143    3924 cache.go:107] acquiring lock: {Name:mk510e0dee9575e97b518258a8fa51895577a920 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 15:18:19.256167    3924 cache.go:107] acquiring lock: {Name:mkc475b589f1a548a3e226799f7920a99845925b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 15:18:19.256230    3924 cache.go:107] acquiring lock: {Name:mk1b4c02e52a436272e48646a6c556e7bf71b3b6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 15:18:19.256244    3924 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0914 15:18:19.256251    3924 cache.go:107] acquiring lock: {Name:mkda40073662d1c9ad3c216bc4be00c93f919b10 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 15:18:19.256265    3924 start.go:365] acquiring machines lock for test-preload-786000: {Name:mk831f2bd5a9a0c932c8a59972319d68edb53d07 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 15:18:19.256285    3924 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 15:18:19.256331    3924 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0914 15:18:19.256335    3924 start.go:369] acquired machines lock for "test-preload-786000" in 48.958µs
	I0914 15:18:19.256347    3924 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0914 15:18:19.256362    3924 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0914 15:18:19.256393    3924 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0914 15:18:19.256376    3924 start.go:93] Provisioning new machine with config: &{Name:test-preload-786000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-786000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 15:18:19.256489    3924 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 15:18:19.256449    3924 cache.go:107] acquiring lock: {Name:mk3357439e47cf621fd518d92084cdd45880d6d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 15:18:19.264860    3924 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0914 15:18:19.256528    3924 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0914 15:18:19.260972    3924 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0914 15:18:19.269779    3924 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0914 15:18:19.269819    3924 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0914 15:18:19.273598    3924 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0914 15:18:19.273616    3924 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0914 15:18:19.273633    3924 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0914 15:18:19.273661    3924 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0914 15:18:19.273682    3924 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0914 15:18:19.273703    3924 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 15:18:19.280839    3924 start.go:159] libmachine.API.Create for "test-preload-786000" (driver="qemu2")
	I0914 15:18:19.280860    3924 client.go:168] LocalClient.Create starting
	I0914 15:18:19.280924    3924 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem
	I0914 15:18:19.280954    3924 main.go:141] libmachine: Decoding PEM data...
	I0914 15:18:19.280965    3924 main.go:141] libmachine: Parsing certificate...
	I0914 15:18:19.281005    3924 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/cert.pem
	I0914 15:18:19.281022    3924 main.go:141] libmachine: Decoding PEM data...
	I0914 15:18:19.281030    3924 main.go:141] libmachine: Parsing certificate...
	I0914 15:18:19.281354    3924 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17243-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1694625400-17243-arm64.iso...
	I0914 15:18:19.441022    3924 main.go:141] libmachine: Creating SSH key...
	I0914 15:18:19.510419    3924 main.go:141] libmachine: Creating Disk image...
	I0914 15:18:19.510429    3924 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 15:18:19.510576    3924 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/test-preload-786000/disk.qcow2.raw /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/test-preload-786000/disk.qcow2
	I0914 15:18:19.520090    3924 main.go:141] libmachine: STDOUT: 
	I0914 15:18:19.520110    3924 main.go:141] libmachine: STDERR: 
	I0914 15:18:19.520180    3924 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/test-preload-786000/disk.qcow2 +20000M
	I0914 15:18:19.527869    3924 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 15:18:19.527892    3924 main.go:141] libmachine: STDERR: 
	I0914 15:18:19.527912    3924 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/test-preload-786000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/test-preload-786000/disk.qcow2
	I0914 15:18:19.527917    3924 main.go:141] libmachine: Starting QEMU VM...
	I0914 15:18:19.527962    3924 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/test-preload-786000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/test-preload-786000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/test-preload-786000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:fe:66:dc:4f:ca -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/test-preload-786000/disk.qcow2
	I0914 15:18:19.529506    3924 main.go:141] libmachine: STDOUT: 
	I0914 15:18:19.529518    3924 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 15:18:19.529539    3924 client.go:171] LocalClient.Create took 248.680375ms
	I0914 15:18:20.128455    3924 cache.go:162] opening:  /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0914 15:18:20.186428    3924 cache.go:162] opening:  /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0914 15:18:20.368541    3924 cache.go:162] opening:  /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0914 15:18:20.598572    3924 cache.go:162] opening:  /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	W0914 15:18:20.774375    3924 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0914 15:18:20.774403    3924 cache.go:162] opening:  /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0914 15:18:20.977261    3924 cache.go:162] opening:  /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0914 15:18:21.255669    3924 cache.go:162] opening:  /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0914 15:18:21.381611    3924 cache.go:157] /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0914 15:18:21.381626    3924 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 2.12560475s
	I0914 15:18:21.381645    3924 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	I0914 15:18:21.529738    3924 start.go:128] duration metric: createHost completed in 2.273278583s
	I0914 15:18:21.529770    3924 start.go:83] releasing machines lock for "test-preload-786000", held for 2.273472792s
	W0914 15:18:21.529816    3924 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:18:21.541207    3924 out.go:177] * Deleting "test-preload-786000" in qemu2 ...
	W0914 15:18:21.559286    3924 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:18:21.559326    3924 start.go:703] Will try again in 5 seconds ...
	W0914 15:18:22.052171    3924 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0914 15:18:22.052301    3924 cache.go:162] opening:  /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0914 15:18:22.822377    3924 cache.go:157] /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0914 15:18:22.822440    3924 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 3.566517167s
	I0914 15:18:22.822479    3924 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0914 15:18:22.926135    3924 cache.go:157] /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0914 15:18:22.926181    3924 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 3.669805792s
	I0914 15:18:22.926211    3924 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0914 15:18:23.057496    3924 cache.go:157] /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0914 15:18:23.057583    3924 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 3.801663791s
	I0914 15:18:23.057614    3924 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0914 15:18:23.697512    3924 cache.go:157] /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0914 15:18:23.697570    3924 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 4.44167825s
	I0914 15:18:23.697600    3924 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0914 15:18:24.401348    3924 cache.go:157] /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0914 15:18:24.401401    3924 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 5.145484916s
	I0914 15:18:24.401432    3924 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0914 15:18:25.744978    3924 cache.go:157] /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0914 15:18:25.745026    3924 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 6.489037s
	I0914 15:18:25.745077    3924 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0914 15:18:26.559473    3924 start.go:365] acquiring machines lock for test-preload-786000: {Name:mk831f2bd5a9a0c932c8a59972319d68edb53d07 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 15:18:26.559887    3924 start.go:369] acquired machines lock for "test-preload-786000" in 329.042µs
	I0914 15:18:26.560006    3924 start.go:93] Provisioning new machine with config: &{Name:test-preload-786000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-786000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 15:18:26.560255    3924 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 15:18:26.565885    3924 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0914 15:18:26.611551    3924 start.go:159] libmachine.API.Create for "test-preload-786000" (driver="qemu2")
	I0914 15:18:26.611616    3924 client.go:168] LocalClient.Create starting
	I0914 15:18:26.611748    3924 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem
	I0914 15:18:26.611802    3924 main.go:141] libmachine: Decoding PEM data...
	I0914 15:18:26.611828    3924 main.go:141] libmachine: Parsing certificate...
	I0914 15:18:26.611917    3924 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/cert.pem
	I0914 15:18:26.611953    3924 main.go:141] libmachine: Decoding PEM data...
	I0914 15:18:26.611974    3924 main.go:141] libmachine: Parsing certificate...
	I0914 15:18:26.612543    3924 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17243-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1694625400-17243-arm64.iso...
	I0914 15:18:26.765817    3924 main.go:141] libmachine: Creating SSH key...
	I0914 15:18:26.795895    3924 main.go:141] libmachine: Creating Disk image...
	I0914 15:18:26.795900    3924 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 15:18:26.796032    3924 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/test-preload-786000/disk.qcow2.raw /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/test-preload-786000/disk.qcow2
	I0914 15:18:26.804538    3924 main.go:141] libmachine: STDOUT: 
	I0914 15:18:26.804557    3924 main.go:141] libmachine: STDERR: 
	I0914 15:18:26.804616    3924 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/test-preload-786000/disk.qcow2 +20000M
	I0914 15:18:26.811936    3924 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 15:18:26.811957    3924 main.go:141] libmachine: STDERR: 
	I0914 15:18:26.811975    3924 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/test-preload-786000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/test-preload-786000/disk.qcow2
	I0914 15:18:26.811983    3924 main.go:141] libmachine: Starting QEMU VM...
	I0914 15:18:26.812039    3924 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/test-preload-786000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/test-preload-786000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/test-preload-786000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:e8:85:52:54:19 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/test-preload-786000/disk.qcow2
	I0914 15:18:26.813597    3924 main.go:141] libmachine: STDOUT: 
	I0914 15:18:26.813616    3924 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 15:18:26.813631    3924 client.go:171] LocalClient.Create took 202.013ms
	I0914 15:18:28.275759    3924 cache.go:157] /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0914 15:18:28.275825    3924 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 9.019816958s
	I0914 15:18:28.275854    3924 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0914 15:18:28.275929    3924 cache.go:87] Successfully saved all images to host disk.
	I0914 15:18:28.815772    3924 start.go:128] duration metric: createHost completed in 2.255543292s
	I0914 15:18:28.815924    3924 start.go:83] releasing machines lock for "test-preload-786000", held for 2.255990416s
	W0914 15:18:28.816182    3924 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-786000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-786000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:18:28.825628    3924 out.go:177] 
	W0914 15:18:28.829727    3924 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 15:18:28.829751    3924 out.go:239] * 
	* 
	W0914 15:18:28.832492    3924 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 15:18:28.840605    3924 out.go:177] 

                                                
                                                
** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-786000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:523: *** TestPreload FAILED at 2023-09-14 15:18:28.859365 -0700 PDT m=+2576.065964584
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-786000 -n test-preload-786000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-786000 -n test-preload-786000: exit status 7 (64.1205ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-786000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-786000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-786000
--- FAIL: TestPreload (9.88s)
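Note: disk preparation is not the failing stage in this test. The driver log above shows a raw-to-qcow2 convert followed by a resize, both with empty STDERR; only the subsequent socket_vmnet-backed qemu launch fails. The equivalent standalone steps, sketched with shortened paths (the report uses the full machine directory under MINIKUBE_HOME):

	# convert the raw seed image to qcow2, then grow it to the requested 20000 MB
	qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2
	qemu-img resize disk.qcow2 +20000M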

                                                
                                    
TestScheduledStopUnix (9.94s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-322000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-322000 --memory=2048 --driver=qemu2 : exit status 80 (9.779049084s)

                                                
                                                
-- stdout --
	* [scheduled-stop-322000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17243
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node scheduled-stop-322000 in cluster scheduled-stop-322000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-322000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-322000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

                                                
                                                
-- stdout --
	* [scheduled-stop-322000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17243
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node scheduled-stop-322000 in cluster scheduled-stop-322000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-322000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-322000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
panic.go:523: *** TestScheduledStopUnix FAILED at 2023-09-14 15:18:38.806109 -0700 PDT m=+2586.012922417
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-322000 -n scheduled-stop-322000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-322000 -n scheduled-stop-322000: exit status 7 (67.611083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-322000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-322000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-322000
--- FAIL: TestScheduledStopUnix (9.94s)
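Note: this failure reproduces outside the Go harness by running the command the test logs verbatim (profile name from this run; any fresh name behaves the same while the socket is unreachable):

	out/minikube-darwin-arm64 start -p scheduled-stop-322000 --memory=2048 --driver=qemu2
	echo $?    # the test expects 0; on this host it exits 80 (GUEST_PROVISION)
	# clean up the profile afterwards, as helpers_test.go does at the end of the section
	out/minikube-darwin-arm64 delete -p scheduled-stop-322000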

                                                
                                    
TestSkaffold (13.28s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe1543883542 version
skaffold_test.go:63: skaffold version: v2.7.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-582000 --memory=2600 --driver=qemu2 
E0914 15:18:51.323488    1425 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/ingress-addon-legacy-438000/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-582000 --memory=2600 --driver=qemu2 : exit status 80 (9.900238208s)

                                                
                                                
-- stdout --
	* [skaffold-582000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17243
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node skaffold-582000 in cluster skaffold-582000
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-582000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-582000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

                                                
                                                
-- stdout --
	* [skaffold-582000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17243
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node skaffold-582000 in cluster skaffold-582000
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-582000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-582000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
panic.go:523: *** TestSkaffold FAILED at 2023-09-14 15:18:52.087916 -0700 PDT m=+2599.295015459
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-582000 -n skaffold-582000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-582000 -n skaffold-582000: exit status 7 (63.192375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-582000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-582000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-582000
--- FAIL: TestSkaffold (13.28s)
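
Note: every qemu2 provisioning failure in this run reports the same root symptom: the driver cannot connect to the socket_vmnet helper at /var/run/socket_vmnet ("Connection refused"). The following is a minimal, illustrative Go sketch of that connectivity check, assuming the helper is expected to listen on the socket path shown in the logs; it is not part of the test suite.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Path taken from the failing log lines above; adjust if socket_vmnet
	// was installed to a different prefix on the CI host.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// A "connection refused" here matches the failures in this report:
		// the helper process is not running or not listening on this path.
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If this check fails on the build agent, none of the socket_vmnet-backed qemu2 tests below can provision a VM, so their individual failures are secondary to the helper being unavailable.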

                                                
                                    
TestRunningBinaryUpgrade (148.2s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:107: v1.6.2 release installation failed: bad response code: 404
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2023-09-14 15:22:00.208638 -0700 PDT m=+2787.419792542
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-979000 -n running-upgrade-979000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-979000 -n running-upgrade-979000: exit status 85 (84.952833ms)

                                                
                                                
-- stdout --
	* Profile "running-upgrade-979000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p running-upgrade-979000"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "running-upgrade-979000" host is not running, skipping log retrieval (state="* Profile \"running-upgrade-979000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p running-upgrade-979000\"")
helpers_test.go:175: Cleaning up "running-upgrade-979000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-979000
--- FAIL: TestRunningBinaryUpgrade (148.20s)
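
Note: TestRunningBinaryUpgrade (above) and TestStoppedBinaryUpgrade/Setup (below) fail before any VM is created because downloading the v1.6.2 minikube release returns HTTP 404. Assuming the binary is fetched from the usual https://storage.googleapis.com/minikube/releases/<version>/minikube-<os>-<arch> layout (an assumption; the exact URL is not shown in this log), a quick reachability check for the darwin/arm64 artifact could look like the sketch below.

package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Hypothetical URL following the common minikube release layout; v1.6.2
	// predates Apple-silicon builds, so a 404 here would be consistent with
	// the "bad response code: 404" reported by the test.
	url := "https://storage.googleapis.com/minikube/releases/v1.6.2/minikube-darwin-arm64"
	resp, err := http.Head(url)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println(url, "->", resp.Status)
}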

                                                
                                    
TestKubernetesUpgrade (15.36s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-525000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:235: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-525000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.841708584s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-525000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17243
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kubernetes-upgrade-525000 in cluster kubernetes-upgrade-525000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-525000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 15:22:00.568748    4451 out.go:296] Setting OutFile to fd 1 ...
	I0914 15:22:00.568859    4451 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:22:00.568862    4451 out.go:309] Setting ErrFile to fd 2...
	I0914 15:22:00.568865    4451 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:22:00.569011    4451 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17243-1006/.minikube/bin
	I0914 15:22:00.570070    4451 out.go:303] Setting JSON to false
	I0914 15:22:00.585314    4451 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3094,"bootTime":1694727026,"procs":407,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 15:22:00.585397    4451 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0914 15:22:00.590776    4451 out.go:177] * [kubernetes-upgrade-525000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0914 15:22:00.597725    4451 out.go:177]   - MINIKUBE_LOCATION=17243
	I0914 15:22:00.600738    4451 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	I0914 15:22:00.597794    4451 notify.go:220] Checking for updates...
	I0914 15:22:00.604552    4451 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 15:22:00.607690    4451 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 15:22:00.610700    4451 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	I0914 15:22:00.613804    4451 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 15:22:00.617077    4451 config.go:182] Loaded profile config "cert-expiration-334000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 15:22:00.617138    4451 config.go:182] Loaded profile config "multinode-463000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 15:22:00.617196    4451 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 15:22:00.621648    4451 out.go:177] * Using the qemu2 driver based on user configuration
	I0914 15:22:00.628711    4451 start.go:298] selected driver: qemu2
	I0914 15:22:00.628715    4451 start.go:902] validating driver "qemu2" against <nil>
	I0914 15:22:00.628721    4451 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 15:22:00.630715    4451 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0914 15:22:00.633681    4451 out.go:177] * Automatically selected the socket_vmnet network
	I0914 15:22:00.636794    4451 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0914 15:22:00.636821    4451 cni.go:84] Creating CNI manager for ""
	I0914 15:22:00.636836    4451 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0914 15:22:00.636839    4451 start_flags.go:321] config:
	{Name:kubernetes-upgrade-525000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-525000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 15:22:00.640962    4451 iso.go:125] acquiring lock: {Name:mkd5b68b252c18146a825ff5365883948b3c6983 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 15:22:00.647706    4451 out.go:177] * Starting control plane node kubernetes-upgrade-525000 in cluster kubernetes-upgrade-525000
	I0914 15:22:00.650672    4451 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0914 15:22:00.650691    4451 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0914 15:22:00.650706    4451 cache.go:57] Caching tarball of preloaded images
	I0914 15:22:00.650778    4451 preload.go:174] Found /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 15:22:00.650791    4451 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0914 15:22:00.650870    4451 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/kubernetes-upgrade-525000/config.json ...
	I0914 15:22:00.650892    4451 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/kubernetes-upgrade-525000/config.json: {Name:mkb7a0d1ef4ec93b38d055d912dbf6e7068d87e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 15:22:00.651101    4451 start.go:365] acquiring machines lock for kubernetes-upgrade-525000: {Name:mk831f2bd5a9a0c932c8a59972319d68edb53d07 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 15:22:00.651134    4451 start.go:369] acquired machines lock for "kubernetes-upgrade-525000" in 25µs
	I0914 15:22:00.651149    4451 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-525000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-525000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID
:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 15:22:00.651182    4451 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 15:22:00.655737    4451 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0914 15:22:00.671518    4451 start.go:159] libmachine.API.Create for "kubernetes-upgrade-525000" (driver="qemu2")
	I0914 15:22:00.671542    4451 client.go:168] LocalClient.Create starting
	I0914 15:22:00.671599    4451 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem
	I0914 15:22:00.671623    4451 main.go:141] libmachine: Decoding PEM data...
	I0914 15:22:00.671635    4451 main.go:141] libmachine: Parsing certificate...
	I0914 15:22:00.671676    4451 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/cert.pem
	I0914 15:22:00.671694    4451 main.go:141] libmachine: Decoding PEM data...
	I0914 15:22:00.671709    4451 main.go:141] libmachine: Parsing certificate...
	I0914 15:22:00.672072    4451 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17243-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1694625400-17243-arm64.iso...
	I0914 15:22:00.788720    4451 main.go:141] libmachine: Creating SSH key...
	I0914 15:22:00.954666    4451 main.go:141] libmachine: Creating Disk image...
	I0914 15:22:00.954676    4451 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 15:22:00.954820    4451 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/kubernetes-upgrade-525000/disk.qcow2.raw /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/kubernetes-upgrade-525000/disk.qcow2
	I0914 15:22:00.963272    4451 main.go:141] libmachine: STDOUT: 
	I0914 15:22:00.963287    4451 main.go:141] libmachine: STDERR: 
	I0914 15:22:00.963339    4451 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/kubernetes-upgrade-525000/disk.qcow2 +20000M
	I0914 15:22:00.970443    4451 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 15:22:00.970454    4451 main.go:141] libmachine: STDERR: 
	I0914 15:22:00.970471    4451 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/kubernetes-upgrade-525000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/kubernetes-upgrade-525000/disk.qcow2
	I0914 15:22:00.970480    4451 main.go:141] libmachine: Starting QEMU VM...
	I0914 15:22:00.970526    4451 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/kubernetes-upgrade-525000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/kubernetes-upgrade-525000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/kubernetes-upgrade-525000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:64:e5:78:c3:36 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/kubernetes-upgrade-525000/disk.qcow2
	I0914 15:22:00.972027    4451 main.go:141] libmachine: STDOUT: 
	I0914 15:22:00.972038    4451 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 15:22:00.972055    4451 client.go:171] LocalClient.Create took 300.515333ms
	I0914 15:22:02.974218    4451 start.go:128] duration metric: createHost completed in 2.323054667s
	I0914 15:22:02.974323    4451 start.go:83] releasing machines lock for "kubernetes-upgrade-525000", held for 2.323229791s
	W0914 15:22:02.974394    4451 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:22:02.982852    4451 out.go:177] * Deleting "kubernetes-upgrade-525000" in qemu2 ...
	W0914 15:22:03.003499    4451 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:22:03.003531    4451 start.go:703] Will try again in 5 seconds ...
	I0914 15:22:08.005727    4451 start.go:365] acquiring machines lock for kubernetes-upgrade-525000: {Name:mk831f2bd5a9a0c932c8a59972319d68edb53d07 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 15:22:08.006224    4451 start.go:369] acquired machines lock for "kubernetes-upgrade-525000" in 378.208µs
	I0914 15:22:08.006368    4451 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-525000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-525000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID
:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 15:22:08.006638    4451 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 15:22:08.017518    4451 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0914 15:22:08.064281    4451 start.go:159] libmachine.API.Create for "kubernetes-upgrade-525000" (driver="qemu2")
	I0914 15:22:08.064330    4451 client.go:168] LocalClient.Create starting
	I0914 15:22:08.064469    4451 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem
	I0914 15:22:08.064539    4451 main.go:141] libmachine: Decoding PEM data...
	I0914 15:22:08.064563    4451 main.go:141] libmachine: Parsing certificate...
	I0914 15:22:08.064646    4451 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/cert.pem
	I0914 15:22:08.064683    4451 main.go:141] libmachine: Decoding PEM data...
	I0914 15:22:08.064698    4451 main.go:141] libmachine: Parsing certificate...
	I0914 15:22:08.065267    4451 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17243-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1694625400-17243-arm64.iso...
	I0914 15:22:08.192339    4451 main.go:141] libmachine: Creating SSH key...
	I0914 15:22:08.325876    4451 main.go:141] libmachine: Creating Disk image...
	I0914 15:22:08.325882    4451 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 15:22:08.326028    4451 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/kubernetes-upgrade-525000/disk.qcow2.raw /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/kubernetes-upgrade-525000/disk.qcow2
	I0914 15:22:08.334467    4451 main.go:141] libmachine: STDOUT: 
	I0914 15:22:08.334480    4451 main.go:141] libmachine: STDERR: 
	I0914 15:22:08.334532    4451 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/kubernetes-upgrade-525000/disk.qcow2 +20000M
	I0914 15:22:08.341619    4451 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 15:22:08.341641    4451 main.go:141] libmachine: STDERR: 
	I0914 15:22:08.341654    4451 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/kubernetes-upgrade-525000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/kubernetes-upgrade-525000/disk.qcow2
	I0914 15:22:08.341661    4451 main.go:141] libmachine: Starting QEMU VM...
	I0914 15:22:08.341707    4451 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/kubernetes-upgrade-525000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/kubernetes-upgrade-525000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/kubernetes-upgrade-525000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:26:d8:39:f9:04 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/kubernetes-upgrade-525000/disk.qcow2
	I0914 15:22:08.343242    4451 main.go:141] libmachine: STDOUT: 
	I0914 15:22:08.343253    4451 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 15:22:08.343264    4451 client.go:171] LocalClient.Create took 278.931583ms
	I0914 15:22:10.345409    4451 start.go:128] duration metric: createHost completed in 2.338789458s
	I0914 15:22:10.345489    4451 start.go:83] releasing machines lock for "kubernetes-upgrade-525000", held for 2.339287709s
	W0914 15:22:10.345970    4451 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-525000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-525000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:22:10.354730    4451 out.go:177] 
	W0914 15:22:10.358738    4451 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 15:22:10.358762    4451 out.go:239] * 
	* 
	W0914 15:22:10.361264    4451 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 15:22:10.369669    4451 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:237: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-525000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-525000
version_upgrade_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-525000 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-525000 status --format={{.Host}}: exit status 7 (35.73475ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-525000 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-525000 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.188571292s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-525000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17243
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node kubernetes-upgrade-525000 in cluster kubernetes-upgrade-525000
	* Restarting existing qemu2 VM for "kubernetes-upgrade-525000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-525000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 15:22:10.545745    4471 out.go:296] Setting OutFile to fd 1 ...
	I0914 15:22:10.545852    4471 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:22:10.545858    4471 out.go:309] Setting ErrFile to fd 2...
	I0914 15:22:10.545862    4471 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:22:10.545983    4471 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17243-1006/.minikube/bin
	I0914 15:22:10.546950    4471 out.go:303] Setting JSON to false
	I0914 15:22:10.562827    4471 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3104,"bootTime":1694727026,"procs":409,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 15:22:10.562891    4471 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0914 15:22:10.567683    4471 out.go:177] * [kubernetes-upgrade-525000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0914 15:22:10.578504    4471 out.go:177]   - MINIKUBE_LOCATION=17243
	I0914 15:22:10.574620    4471 notify.go:220] Checking for updates...
	I0914 15:22:10.586621    4471 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	I0914 15:22:10.589522    4471 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 15:22:10.593578    4471 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 15:22:10.596661    4471 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	I0914 15:22:10.599609    4471 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 15:22:10.602997    4471 config.go:182] Loaded profile config "kubernetes-upgrade-525000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0914 15:22:10.603270    4471 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 15:22:10.607627    4471 out.go:177] * Using the qemu2 driver based on existing profile
	I0914 15:22:10.614608    4471 start.go:298] selected driver: qemu2
	I0914 15:22:10.614613    4471 start.go:902] validating driver "qemu2" against &{Name:kubernetes-upgrade-525000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubern
etesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-525000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:do
cker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 15:22:10.614689    4471 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 15:22:10.616673    4471 cni.go:84] Creating CNI manager for ""
	I0914 15:22:10.616691    4471 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 15:22:10.616695    4471 start_flags.go:321] config:
	{Name:kubernetes-upgrade-525000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:kubernetes-upgrade-525000 Nam
espace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFi
rmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 15:22:10.620995    4471 iso.go:125] acquiring lock: {Name:mkd5b68b252c18146a825ff5365883948b3c6983 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 15:22:10.627662    4471 out.go:177] * Starting control plane node kubernetes-upgrade-525000 in cluster kubernetes-upgrade-525000
	I0914 15:22:10.631620    4471 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0914 15:22:10.631637    4471 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0914 15:22:10.631646    4471 cache.go:57] Caching tarball of preloaded images
	I0914 15:22:10.631710    4471 preload.go:174] Found /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 15:22:10.631716    4471 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0914 15:22:10.631777    4471 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/kubernetes-upgrade-525000/config.json ...
	I0914 15:22:10.632151    4471 start.go:365] acquiring machines lock for kubernetes-upgrade-525000: {Name:mk831f2bd5a9a0c932c8a59972319d68edb53d07 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 15:22:10.632179    4471 start.go:369] acquired machines lock for "kubernetes-upgrade-525000" in 21.791µs
	I0914 15:22:10.632190    4471 start.go:96] Skipping create...Using existing machine configuration
	I0914 15:22:10.632195    4471 fix.go:54] fixHost starting: 
	I0914 15:22:10.632321    4471 fix.go:102] recreateIfNeeded on kubernetes-upgrade-525000: state=Stopped err=<nil>
	W0914 15:22:10.632334    4471 fix.go:128] unexpected machine state, will restart: <nil>
	I0914 15:22:10.635630    4471 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-525000" ...
	I0914 15:22:10.643665    4471 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/kubernetes-upgrade-525000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/kubernetes-upgrade-525000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/kubernetes-upgrade-525000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:26:d8:39:f9:04 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/kubernetes-upgrade-525000/disk.qcow2
	I0914 15:22:10.645674    4471 main.go:141] libmachine: STDOUT: 
	I0914 15:22:10.645698    4471 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 15:22:10.645728    4471 fix.go:56] fixHost completed within 13.531708ms
	I0914 15:22:10.645734    4471 start.go:83] releasing machines lock for "kubernetes-upgrade-525000", held for 13.550167ms
	W0914 15:22:10.645741    4471 start.go:688] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 15:22:10.645792    4471 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:22:10.645797    4471 start.go:703] Will try again in 5 seconds ...
	I0914 15:22:15.647829    4471 start.go:365] acquiring machines lock for kubernetes-upgrade-525000: {Name:mk831f2bd5a9a0c932c8a59972319d68edb53d07 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 15:22:15.648309    4471 start.go:369] acquired machines lock for "kubernetes-upgrade-525000" in 370.875µs
	I0914 15:22:15.648497    4471 start.go:96] Skipping create...Using existing machine configuration
	I0914 15:22:15.648518    4471 fix.go:54] fixHost starting: 
	I0914 15:22:15.649393    4471 fix.go:102] recreateIfNeeded on kubernetes-upgrade-525000: state=Stopped err=<nil>
	W0914 15:22:15.649419    4471 fix.go:128] unexpected machine state, will restart: <nil>
	I0914 15:22:15.659004    4471 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-525000" ...
	I0914 15:22:15.663123    4471 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/kubernetes-upgrade-525000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/kubernetes-upgrade-525000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/kubernetes-upgrade-525000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:26:d8:39:f9:04 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/kubernetes-upgrade-525000/disk.qcow2
	I0914 15:22:15.672575    4471 main.go:141] libmachine: STDOUT: 
	I0914 15:22:15.672614    4471 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 15:22:15.672751    4471 fix.go:56] fixHost completed within 24.221792ms
	I0914 15:22:15.672770    4471 start.go:83] releasing machines lock for "kubernetes-upgrade-525000", held for 24.420709ms
	W0914 15:22:15.672922    4471 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-525000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-525000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:22:15.679900    4471 out.go:177] 
	W0914 15:22:15.683982    4471 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 15:22:15.684032    4471 out.go:239] * 
	* 
	W0914 15:22:15.686513    4471 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 15:22:15.695901    4471 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:258: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-525000 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-525000 version --output=json
version_upgrade_test.go:261: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-525000 version --output=json: exit status 1 (64.139709ms)

                                                
                                                
** stderr ** 
	error: context "kubernetes-upgrade-525000" does not exist

                                                
                                                
** /stderr **
version_upgrade_test.go:263: error running kubectl: exit status 1
panic.go:523: *** TestKubernetesUpgrade FAILED at 2023-09-14 15:22:15.773936 -0700 PDT m=+2802.985425209
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-525000 -n kubernetes-upgrade-525000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-525000 -n kubernetes-upgrade-525000: exit status 7 (34.037875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-525000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-525000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-525000
--- FAIL: TestKubernetesUpgrade (15.36s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.37s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.31.2 on darwin (arm64)
- MINIKUBE_LOCATION=17243
- KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2948282285/001
* Using the hyperkit driver based on user configuration

                                                
                                                
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

                                                
                                                
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.37s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.04s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.31.2 on darwin (arm64)
- MINIKUBE_LOCATION=17243
- KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1487593297/001
* Using the hyperkit driver based on user configuration

                                                
                                                
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

                                                
                                                
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.04s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (162s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
version_upgrade_test.go:168: v1.6.2 release installation failed: bad response code: 404
--- FAIL: TestStoppedBinaryUpgrade/Setup (162.00s)

                                                
                                    
TestPause/serial/Start (9.93s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-350000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-350000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.863645209s)

                                                
                                                
-- stdout --
	* [pause-350000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17243
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node pause-350000 in cluster pause-350000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-350000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-350000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-350000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-350000 -n pause-350000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-350000 -n pause-350000: exit status 7 (67.919959ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-350000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.93s)
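This failure and the ones that follow it all exit with GUEST_PROVISION for the same reason: the qemu2 driver cannot reach the socket_vmnet Unix socket at /var/run/socket_vmnet on the build host. A quick way to confirm whether anything is actually listening on that socket, using only standard macOS tools (a diagnostic sketch based on the path shown in the output above, not part of the test suite):

    ls -l /var/run/socket_vmnet          # does the socket file exist at all?
    pgrep -fl socket_vmnet               # is a socket_vmnet process running?
    sudo lsof -U | grep socket_vmnet     # is any process actually bound to the socket?

"Connection refused" on a Unix socket means the socket file may exist but no process is accepting connections on it, so the second and third checks are the ones that matter here.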

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (9.83s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-345000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-345000 --driver=qemu2 : exit status 80 (9.757315333s)

                                                
                                                
-- stdout --
	* [NoKubernetes-345000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17243
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node NoKubernetes-345000 in cluster NoKubernetes-345000
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-345000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-345000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-345000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-345000 -n NoKubernetes-345000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-345000 -n NoKubernetes-345000: exit status 7 (66.948417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-345000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.83s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (5.47s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-345000 --no-kubernetes --driver=qemu2 
E0914 15:22:57.082278    1425 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/ingress-addon-legacy-438000/client.crt: no such file or directory
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-345000 --no-kubernetes --driver=qemu2 : exit status 80 (5.401276917s)

                                                
                                                
-- stdout --
	* [NoKubernetes-345000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17243
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-345000
	* Restarting existing qemu2 VM for "NoKubernetes-345000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-345000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-345000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-345000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-345000 -n NoKubernetes-345000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-345000 -n NoKubernetes-345000: exit status 7 (70.415667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-345000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.47s)

                                                
                                    
TestNoKubernetes/serial/Start (5.47s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-345000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-345000 --no-kubernetes --driver=qemu2 : exit status 80 (5.401808s)

                                                
                                                
-- stdout --
	* [NoKubernetes-345000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17243
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-345000
	* Restarting existing qemu2 VM for "NoKubernetes-345000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-345000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-345000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-345000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-345000 -n NoKubernetes-345000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-345000 -n NoKubernetes-345000: exit status 7 (72.024042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-345000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.47s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (5.47s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-345000 --driver=qemu2 
E0914 15:23:07.445049    1425 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/client.crt: no such file or directory
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-345000 --driver=qemu2 : exit status 80 (5.401073542s)

                                                
                                                
-- stdout --
	* [NoKubernetes-345000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17243
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-345000
	* Restarting existing qemu2 VM for "NoKubernetes-345000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-345000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-345000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-345000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-345000 -n NoKubernetes-345000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-345000 -n NoKubernetes-345000: exit status 7 (69.083084ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-345000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.47s)
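The three NoKubernetes reruns above fail at a different step than the fresh starts: the profile already exists, so the driver takes the "Restarting existing qemu2 VM" path and the error is reported as "driver start" rather than "creating host", but the missing piece is the same socket. If socket_vmnet is meant to be kept alive by a launchd job on this agent (the service label is not visible in these logs, hence the broad grep), the following shows whether such a job is loaded (again a sketch, not part of the suite):

    sudo launchctl list | grep -i socket_vmnet   # any loaded launchd job mentioning socket_vmnet
    ls -l /var/run/socket_vmnet                  # recheck the socket once the service is (re)started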

                                                
                                    
TestNetworkPlugins/group/auto/Start (10.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-710000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-710000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (10.01481525s)

                                                
                                                
-- stdout --
	* [auto-710000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17243
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node auto-710000 in cluster auto-710000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-710000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 15:23:09.296987    4606 out.go:296] Setting OutFile to fd 1 ...
	I0914 15:23:09.297262    4606 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:23:09.297269    4606 out.go:309] Setting ErrFile to fd 2...
	I0914 15:23:09.297272    4606 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:23:09.297411    4606 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17243-1006/.minikube/bin
	I0914 15:23:09.298604    4606 out.go:303] Setting JSON to false
	I0914 15:23:09.314018    4606 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3163,"bootTime":1694727026,"procs":410,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 15:23:09.314085    4606 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0914 15:23:09.316799    4606 out.go:177] * [auto-710000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0914 15:23:09.324597    4606 out.go:177]   - MINIKUBE_LOCATION=17243
	I0914 15:23:09.324670    4606 notify.go:220] Checking for updates...
	I0914 15:23:09.331525    4606 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	I0914 15:23:09.334575    4606 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 15:23:09.337457    4606 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 15:23:09.340532    4606 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	I0914 15:23:09.343579    4606 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 15:23:09.345306    4606 config.go:182] Loaded profile config "multinode-463000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 15:23:09.345361    4606 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 15:23:09.349560    4606 out.go:177] * Using the qemu2 driver based on user configuration
	I0914 15:23:09.360519    4606 start.go:298] selected driver: qemu2
	I0914 15:23:09.360523    4606 start.go:902] validating driver "qemu2" against <nil>
	I0914 15:23:09.360528    4606 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 15:23:09.362609    4606 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0914 15:23:09.365502    4606 out.go:177] * Automatically selected the socket_vmnet network
	I0914 15:23:09.368656    4606 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 15:23:09.368678    4606 cni.go:84] Creating CNI manager for ""
	I0914 15:23:09.368694    4606 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 15:23:09.368698    4606 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0914 15:23:09.368704    4606 start_flags.go:321] config:
	{Name:auto-710000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:auto-710000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISoc
ket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0
AutoPauseInterval:1m0s}
	I0914 15:23:09.372929    4606 iso.go:125] acquiring lock: {Name:mkd5b68b252c18146a825ff5365883948b3c6983 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 15:23:09.380540    4606 out.go:177] * Starting control plane node auto-710000 in cluster auto-710000
	I0914 15:23:09.384558    4606 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0914 15:23:09.384578    4606 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0914 15:23:09.384592    4606 cache.go:57] Caching tarball of preloaded images
	I0914 15:23:09.384670    4606 preload.go:174] Found /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 15:23:09.384676    4606 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0914 15:23:09.384750    4606 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/auto-710000/config.json ...
	I0914 15:23:09.384763    4606 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/auto-710000/config.json: {Name:mke70e330310852043b49f4a92b73c8f202892d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 15:23:09.384996    4606 start.go:365] acquiring machines lock for auto-710000: {Name:mk831f2bd5a9a0c932c8a59972319d68edb53d07 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 15:23:09.385028    4606 start.go:369] acquired machines lock for "auto-710000" in 26.459µs
	I0914 15:23:09.385049    4606 start.go:93] Provisioning new machine with config: &{Name:auto-710000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.28.1 ClusterName:auto-710000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144
MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 15:23:09.385083    4606 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 15:23:09.389541    4606 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0914 15:23:09.406274    4606 start.go:159] libmachine.API.Create for "auto-710000" (driver="qemu2")
	I0914 15:23:09.406295    4606 client.go:168] LocalClient.Create starting
	I0914 15:23:09.406352    4606 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem
	I0914 15:23:09.406377    4606 main.go:141] libmachine: Decoding PEM data...
	I0914 15:23:09.406389    4606 main.go:141] libmachine: Parsing certificate...
	I0914 15:23:09.406432    4606 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/cert.pem
	I0914 15:23:09.406451    4606 main.go:141] libmachine: Decoding PEM data...
	I0914 15:23:09.406460    4606 main.go:141] libmachine: Parsing certificate...
	I0914 15:23:09.406815    4606 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17243-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1694625400-17243-arm64.iso...
	I0914 15:23:09.531860    4606 main.go:141] libmachine: Creating SSH key...
	I0914 15:23:09.764012    4606 main.go:141] libmachine: Creating Disk image...
	I0914 15:23:09.764022    4606 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 15:23:09.764178    4606 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/auto-710000/disk.qcow2.raw /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/auto-710000/disk.qcow2
	I0914 15:23:09.772803    4606 main.go:141] libmachine: STDOUT: 
	I0914 15:23:09.772820    4606 main.go:141] libmachine: STDERR: 
	I0914 15:23:09.772883    4606 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/auto-710000/disk.qcow2 +20000M
	I0914 15:23:09.780040    4606 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 15:23:09.780061    4606 main.go:141] libmachine: STDERR: 
	I0914 15:23:09.780083    4606 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/auto-710000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/auto-710000/disk.qcow2
	I0914 15:23:09.780088    4606 main.go:141] libmachine: Starting QEMU VM...
	I0914 15:23:09.780126    4606 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/auto-710000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/auto-710000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/auto-710000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:e4:1f:92:c1:6e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/auto-710000/disk.qcow2
	I0914 15:23:09.781631    4606 main.go:141] libmachine: STDOUT: 
	I0914 15:23:09.781644    4606 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 15:23:09.781661    4606 client.go:171] LocalClient.Create took 375.370834ms
	I0914 15:23:11.783797    4606 start.go:128] duration metric: createHost completed in 2.398742542s
	I0914 15:23:11.783898    4606 start.go:83] releasing machines lock for "auto-710000", held for 2.398911667s
	W0914 15:23:11.783963    4606 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:23:11.792300    4606 out.go:177] * Deleting "auto-710000" in qemu2 ...
	W0914 15:23:11.816966    4606 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:23:11.816997    4606 start.go:703] Will try again in 5 seconds ...
	I0914 15:23:16.819080    4606 start.go:365] acquiring machines lock for auto-710000: {Name:mk831f2bd5a9a0c932c8a59972319d68edb53d07 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 15:23:16.819670    4606 start.go:369] acquired machines lock for "auto-710000" in 448.084µs
	I0914 15:23:16.819862    4606 start.go:93] Provisioning new machine with config: &{Name:auto-710000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.28.1 ClusterName:auto-710000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144
MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 15:23:16.820205    4606 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 15:23:16.825017    4606 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0914 15:23:16.872425    4606 start.go:159] libmachine.API.Create for "auto-710000" (driver="qemu2")
	I0914 15:23:16.872479    4606 client.go:168] LocalClient.Create starting
	I0914 15:23:16.872611    4606 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem
	I0914 15:23:16.872679    4606 main.go:141] libmachine: Decoding PEM data...
	I0914 15:23:16.872698    4606 main.go:141] libmachine: Parsing certificate...
	I0914 15:23:16.872768    4606 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/cert.pem
	I0914 15:23:16.872805    4606 main.go:141] libmachine: Decoding PEM data...
	I0914 15:23:16.872817    4606 main.go:141] libmachine: Parsing certificate...
	I0914 15:23:16.873357    4606 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17243-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1694625400-17243-arm64.iso...
	I0914 15:23:17.002105    4606 main.go:141] libmachine: Creating SSH key...
	I0914 15:23:17.223705    4606 main.go:141] libmachine: Creating Disk image...
	I0914 15:23:17.223717    4606 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 15:23:17.223848    4606 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/auto-710000/disk.qcow2.raw /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/auto-710000/disk.qcow2
	I0914 15:23:17.232656    4606 main.go:141] libmachine: STDOUT: 
	I0914 15:23:17.232669    4606 main.go:141] libmachine: STDERR: 
	I0914 15:23:17.232728    4606 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/auto-710000/disk.qcow2 +20000M
	I0914 15:23:17.239871    4606 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 15:23:17.239882    4606 main.go:141] libmachine: STDERR: 
	I0914 15:23:17.239897    4606 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/auto-710000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/auto-710000/disk.qcow2
	I0914 15:23:17.239902    4606 main.go:141] libmachine: Starting QEMU VM...
	I0914 15:23:17.239946    4606 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/auto-710000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/auto-710000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/auto-710000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:55:59:df:d9:77 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/auto-710000/disk.qcow2
	I0914 15:23:17.241428    4606 main.go:141] libmachine: STDOUT: 
	I0914 15:23:17.241450    4606 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 15:23:17.241463    4606 client.go:171] LocalClient.Create took 368.986417ms
	I0914 15:23:19.243593    4606 start.go:128] duration metric: createHost completed in 2.423410041s
	I0914 15:23:19.243670    4606 start.go:83] releasing machines lock for "auto-710000", held for 2.423992375s
	W0914 15:23:19.244107    4606 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-710000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-710000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:23:19.252731    4606 out.go:177] 
	W0914 15:23:19.257800    4606 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 15:23:19.257838    4606 out.go:239] * 
	* 
	W0914 15:23:19.260384    4606 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 15:23:19.269778    4606 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (10.02s)
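The --alsologtostderr trace above exposes the sequence the shorter logs hide: libmachine builds the disk with qemu-img convert and qemu-img resize (both succeed), then launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and it is that wrapper's connection to /var/run/socket_vmnet that is refused; minikube then deletes the machine, waits five seconds, and retries once with the same result. The failing step can be reproduced in isolation by handing the wrapper a harmless command instead of QEMU (paths and the "socket path followed by a command" calling convention are taken directly from the executing: lines above; the choice of /usr/bin/true is only an illustration):

    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true

If this prints the same Failed to connect to "/var/run/socket_vmnet": Connection refused, the problem is entirely on the host side and no change to the minikube or QEMU flags will affect it.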

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (9.73s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-710000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-710000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.727159417s)

                                                
                                                
-- stdout --
	* [kindnet-710000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17243
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kindnet-710000 in cluster kindnet-710000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-710000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 15:23:21.396158    4716 out.go:296] Setting OutFile to fd 1 ...
	I0914 15:23:21.396284    4716 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:23:21.396287    4716 out.go:309] Setting ErrFile to fd 2...
	I0914 15:23:21.396290    4716 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:23:21.396419    4716 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17243-1006/.minikube/bin
	I0914 15:23:21.397514    4716 out.go:303] Setting JSON to false
	I0914 15:23:21.412503    4716 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3175,"bootTime":1694727026,"procs":410,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 15:23:21.412560    4716 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0914 15:23:21.417922    4716 out.go:177] * [kindnet-710000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0914 15:23:21.425808    4716 out.go:177]   - MINIKUBE_LOCATION=17243
	I0914 15:23:21.425863    4716 notify.go:220] Checking for updates...
	I0914 15:23:21.428843    4716 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	I0914 15:23:21.431895    4716 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 15:23:21.434812    4716 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 15:23:21.437806    4716 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	I0914 15:23:21.440849    4716 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 15:23:21.444152    4716 config.go:182] Loaded profile config "multinode-463000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 15:23:21.444203    4716 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 15:23:21.448841    4716 out.go:177] * Using the qemu2 driver based on user configuration
	I0914 15:23:21.455816    4716 start.go:298] selected driver: qemu2
	I0914 15:23:21.455821    4716 start.go:902] validating driver "qemu2" against <nil>
	I0914 15:23:21.455827    4716 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 15:23:21.457756    4716 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0914 15:23:21.460820    4716 out.go:177] * Automatically selected the socket_vmnet network
	I0914 15:23:21.463896    4716 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 15:23:21.463918    4716 cni.go:84] Creating CNI manager for "kindnet"
	I0914 15:23:21.463924    4716 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0914 15:23:21.463930    4716 start_flags.go:321] config:
	{Name:kindnet-710000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:kindnet-710000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: S
SHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 15:23:21.468017    4716 iso.go:125] acquiring lock: {Name:mkd5b68b252c18146a825ff5365883948b3c6983 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 15:23:21.474822    4716 out.go:177] * Starting control plane node kindnet-710000 in cluster kindnet-710000
	I0914 15:23:21.478791    4716 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0914 15:23:21.478808    4716 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0914 15:23:21.478817    4716 cache.go:57] Caching tarball of preloaded images
	I0914 15:23:21.478885    4716 preload.go:174] Found /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 15:23:21.478891    4716 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0914 15:23:21.478950    4716 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/kindnet-710000/config.json ...
	I0914 15:23:21.478962    4716 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/kindnet-710000/config.json: {Name:mk4205fb69205b0cff9ef247e4200bcaa355c4e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 15:23:21.479174    4716 start.go:365] acquiring machines lock for kindnet-710000: {Name:mk831f2bd5a9a0c932c8a59972319d68edb53d07 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 15:23:21.479210    4716 start.go:369] acquired machines lock for "kindnet-710000" in 29.542µs
	I0914 15:23:21.479224    4716 start.go:93] Provisioning new machine with config: &{Name:kindnet-710000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.1 ClusterName:kindnet-710000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 15:23:21.479271    4716 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 15:23:21.487681    4716 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0914 15:23:21.504504    4716 start.go:159] libmachine.API.Create for "kindnet-710000" (driver="qemu2")
	I0914 15:23:21.504527    4716 client.go:168] LocalClient.Create starting
	I0914 15:23:21.504578    4716 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem
	I0914 15:23:21.504606    4716 main.go:141] libmachine: Decoding PEM data...
	I0914 15:23:21.504622    4716 main.go:141] libmachine: Parsing certificate...
	I0914 15:23:21.504664    4716 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/cert.pem
	I0914 15:23:21.504683    4716 main.go:141] libmachine: Decoding PEM data...
	I0914 15:23:21.504693    4716 main.go:141] libmachine: Parsing certificate...
	I0914 15:23:21.505017    4716 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17243-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1694625400-17243-arm64.iso...
	I0914 15:23:21.620477    4716 main.go:141] libmachine: Creating SSH key...
	I0914 15:23:21.717212    4716 main.go:141] libmachine: Creating Disk image...
	I0914 15:23:21.717221    4716 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 15:23:21.717351    4716 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/kindnet-710000/disk.qcow2.raw /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/kindnet-710000/disk.qcow2
	I0914 15:23:21.725923    4716 main.go:141] libmachine: STDOUT: 
	I0914 15:23:21.725939    4716 main.go:141] libmachine: STDERR: 
	I0914 15:23:21.725995    4716 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/kindnet-710000/disk.qcow2 +20000M
	I0914 15:23:21.733076    4716 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 15:23:21.733088    4716 main.go:141] libmachine: STDERR: 
	I0914 15:23:21.733106    4716 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/kindnet-710000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/kindnet-710000/disk.qcow2
	I0914 15:23:21.733114    4716 main.go:141] libmachine: Starting QEMU VM...
	I0914 15:23:21.733144    4716 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/kindnet-710000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/kindnet-710000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/kindnet-710000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:17:b1:15:0a:ed -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/kindnet-710000/disk.qcow2
	I0914 15:23:21.734634    4716 main.go:141] libmachine: STDOUT: 
	I0914 15:23:21.734647    4716 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 15:23:21.734666    4716 client.go:171] LocalClient.Create took 230.137708ms
	I0914 15:23:23.736878    4716 start.go:128] duration metric: createHost completed in 2.257623375s
	I0914 15:23:23.736962    4716 start.go:83] releasing machines lock for "kindnet-710000", held for 2.2577915s
	W0914 15:23:23.737035    4716 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:23:23.747275    4716 out.go:177] * Deleting "kindnet-710000" in qemu2 ...
	W0914 15:23:23.765096    4716 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:23:23.765123    4716 start.go:703] Will try again in 5 seconds ...
	I0914 15:23:28.767316    4716 start.go:365] acquiring machines lock for kindnet-710000: {Name:mk831f2bd5a9a0c932c8a59972319d68edb53d07 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 15:23:28.767831    4716 start.go:369] acquired machines lock for "kindnet-710000" in 403.583µs
	I0914 15:23:28.767980    4716 start.go:93] Provisioning new machine with config: &{Name:kindnet-710000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.1 ClusterName:kindnet-710000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 15:23:28.768269    4716 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 15:23:28.775958    4716 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0914 15:23:28.822320    4716 start.go:159] libmachine.API.Create for "kindnet-710000" (driver="qemu2")
	I0914 15:23:28.822362    4716 client.go:168] LocalClient.Create starting
	I0914 15:23:28.822485    4716 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem
	I0914 15:23:28.822553    4716 main.go:141] libmachine: Decoding PEM data...
	I0914 15:23:28.822568    4716 main.go:141] libmachine: Parsing certificate...
	I0914 15:23:28.822634    4716 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/cert.pem
	I0914 15:23:28.822671    4716 main.go:141] libmachine: Decoding PEM data...
	I0914 15:23:28.822693    4716 main.go:141] libmachine: Parsing certificate...
	I0914 15:23:28.823189    4716 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17243-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1694625400-17243-arm64.iso...
	I0914 15:23:28.949947    4716 main.go:141] libmachine: Creating SSH key...
	I0914 15:23:29.035397    4716 main.go:141] libmachine: Creating Disk image...
	I0914 15:23:29.035406    4716 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 15:23:29.035539    4716 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/kindnet-710000/disk.qcow2.raw /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/kindnet-710000/disk.qcow2
	I0914 15:23:29.043809    4716 main.go:141] libmachine: STDOUT: 
	I0914 15:23:29.043824    4716 main.go:141] libmachine: STDERR: 
	I0914 15:23:29.043876    4716 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/kindnet-710000/disk.qcow2 +20000M
	I0914 15:23:29.050976    4716 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 15:23:29.050997    4716 main.go:141] libmachine: STDERR: 
	I0914 15:23:29.051012    4716 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/kindnet-710000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/kindnet-710000/disk.qcow2
	I0914 15:23:29.051020    4716 main.go:141] libmachine: Starting QEMU VM...
	I0914 15:23:29.051068    4716 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/kindnet-710000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/kindnet-710000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/kindnet-710000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:ab:09:52:67:91 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/kindnet-710000/disk.qcow2
	I0914 15:23:29.052552    4716 main.go:141] libmachine: STDOUT: 
	I0914 15:23:29.052565    4716 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 15:23:29.052579    4716 client.go:171] LocalClient.Create took 230.212333ms
	I0914 15:23:31.054716    4716 start.go:128] duration metric: createHost completed in 2.286467667s
	I0914 15:23:31.054783    4716 start.go:83] releasing machines lock for "kindnet-710000", held for 2.286976125s
	W0914 15:23:31.055184    4716 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-710000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-710000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:23:31.064949    4716 out.go:177] 
	W0914 15:23:31.069944    4716 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 15:23:31.070021    4716 out.go:239] * 
	* 
	W0914 15:23:31.073220    4716 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 15:23:31.081857    4716 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.73s)
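
Every start in this group dies at the same step: libmachine prepares the disk image successfully, then invokes /opt/socket_vmnet/bin/socket_vmnet_client against /var/run/socket_vmnet to launch qemu-system-aarch64, and the connection is refused, so no VM is ever created. The following is a minimal Go sketch, not part of the test suite, that probes that same Unix socket (the path matches SocketVMnetPath in the config dump above) to check whether the socket_vmnet daemon is actually listening on the CI host:

// probe_socket_vmnet.go - hypothetical helper, not in the minikube repo.
// It only checks reachability of the Unix socket that socket_vmnet_client
// needs; a "connection refused" here matches the error in the logs above.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the profile config
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is listening at", sock)
}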

TestNetworkPlugins/group/calico/Start (9.69s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-710000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-710000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.685385917s)

-- stdout --
	* [calico-710000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17243
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node calico-710000 in cluster calico-710000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-710000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0914 15:23:33.316489    4831 out.go:296] Setting OutFile to fd 1 ...
	I0914 15:23:33.316596    4831 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:23:33.316599    4831 out.go:309] Setting ErrFile to fd 2...
	I0914 15:23:33.316601    4831 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:23:33.316716    4831 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17243-1006/.minikube/bin
	I0914 15:23:33.317771    4831 out.go:303] Setting JSON to false
	I0914 15:23:33.332822    4831 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3187,"bootTime":1694727026,"procs":409,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 15:23:33.332901    4831 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0914 15:23:33.337949    4831 out.go:177] * [calico-710000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0914 15:23:33.344993    4831 out.go:177]   - MINIKUBE_LOCATION=17243
	I0914 15:23:33.348951    4831 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	I0914 15:23:33.345073    4831 notify.go:220] Checking for updates...
	I0914 15:23:33.351810    4831 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 15:23:33.355911    4831 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 15:23:33.358981    4831 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	I0914 15:23:33.361917    4831 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 15:23:33.365314    4831 config.go:182] Loaded profile config "multinode-463000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 15:23:33.365366    4831 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 15:23:33.369901    4831 out.go:177] * Using the qemu2 driver based on user configuration
	I0914 15:23:33.376932    4831 start.go:298] selected driver: qemu2
	I0914 15:23:33.376936    4831 start.go:902] validating driver "qemu2" against <nil>
	I0914 15:23:33.376942    4831 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 15:23:33.378902    4831 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0914 15:23:33.381928    4831 out.go:177] * Automatically selected the socket_vmnet network
	I0914 15:23:33.385043    4831 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 15:23:33.385083    4831 cni.go:84] Creating CNI manager for "calico"
	I0914 15:23:33.385088    4831 start_flags.go:316] Found "Calico" CNI - setting NetworkPlugin=cni
	I0914 15:23:33.385095    4831 start_flags.go:321] config:
	{Name:calico-710000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:calico-710000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHA
gentPID:0 AutoPauseInterval:1m0s}
	I0914 15:23:33.389161    4831 iso.go:125] acquiring lock: {Name:mkd5b68b252c18146a825ff5365883948b3c6983 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 15:23:33.395987    4831 out.go:177] * Starting control plane node calico-710000 in cluster calico-710000
	I0914 15:23:33.399898    4831 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0914 15:23:33.399916    4831 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0914 15:23:33.399929    4831 cache.go:57] Caching tarball of preloaded images
	I0914 15:23:33.400006    4831 preload.go:174] Found /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 15:23:33.400012    4831 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0914 15:23:33.400085    4831 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/calico-710000/config.json ...
	I0914 15:23:33.400097    4831 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/calico-710000/config.json: {Name:mk291122ee5d5a3765b9e73a12448ee56c26b461 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 15:23:33.400301    4831 start.go:365] acquiring machines lock for calico-710000: {Name:mk831f2bd5a9a0c932c8a59972319d68edb53d07 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 15:23:33.400331    4831 start.go:369] acquired machines lock for "calico-710000" in 23.791µs
	I0914 15:23:33.400344    4831 start.go:93] Provisioning new machine with config: &{Name:calico-710000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.28.1 ClusterName:calico-710000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 15:23:33.400375    4831 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 15:23:33.408904    4831 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0914 15:23:33.424133    4831 start.go:159] libmachine.API.Create for "calico-710000" (driver="qemu2")
	I0914 15:23:33.424157    4831 client.go:168] LocalClient.Create starting
	I0914 15:23:33.424226    4831 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem
	I0914 15:23:33.424253    4831 main.go:141] libmachine: Decoding PEM data...
	I0914 15:23:33.424262    4831 main.go:141] libmachine: Parsing certificate...
	I0914 15:23:33.424301    4831 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/cert.pem
	I0914 15:23:33.424319    4831 main.go:141] libmachine: Decoding PEM data...
	I0914 15:23:33.424329    4831 main.go:141] libmachine: Parsing certificate...
	I0914 15:23:33.424659    4831 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17243-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1694625400-17243-arm64.iso...
	I0914 15:23:33.537179    4831 main.go:141] libmachine: Creating SSH key...
	I0914 15:23:33.637932    4831 main.go:141] libmachine: Creating Disk image...
	I0914 15:23:33.637938    4831 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 15:23:33.638083    4831 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/calico-710000/disk.qcow2.raw /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/calico-710000/disk.qcow2
	I0914 15:23:33.646641    4831 main.go:141] libmachine: STDOUT: 
	I0914 15:23:33.646656    4831 main.go:141] libmachine: STDERR: 
	I0914 15:23:33.646721    4831 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/calico-710000/disk.qcow2 +20000M
	I0914 15:23:33.654942    4831 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 15:23:33.654957    4831 main.go:141] libmachine: STDERR: 
	I0914 15:23:33.654972    4831 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/calico-710000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/calico-710000/disk.qcow2
	I0914 15:23:33.654983    4831 main.go:141] libmachine: Starting QEMU VM...
	I0914 15:23:33.655023    4831 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/calico-710000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/calico-710000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/calico-710000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:c0:7a:ed:83:e4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/calico-710000/disk.qcow2
	I0914 15:23:33.656670    4831 main.go:141] libmachine: STDOUT: 
	I0914 15:23:33.656682    4831 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 15:23:33.656698    4831 client.go:171] LocalClient.Create took 232.541333ms
	I0914 15:23:35.658872    4831 start.go:128] duration metric: createHost completed in 2.258521792s
	I0914 15:23:35.658930    4831 start.go:83] releasing machines lock for "calico-710000", held for 2.258639042s
	W0914 15:23:35.658989    4831 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:23:35.667265    4831 out.go:177] * Deleting "calico-710000" in qemu2 ...
	W0914 15:23:35.686568    4831 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:23:35.686599    4831 start.go:703] Will try again in 5 seconds ...
	I0914 15:23:40.688769    4831 start.go:365] acquiring machines lock for calico-710000: {Name:mk831f2bd5a9a0c932c8a59972319d68edb53d07 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 15:23:40.689248    4831 start.go:369] acquired machines lock for "calico-710000" in 375.5µs
	I0914 15:23:40.689364    4831 start.go:93] Provisioning new machine with config: &{Name:calico-710000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.28.1 ClusterName:calico-710000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 15:23:40.689650    4831 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 15:23:40.693441    4831 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0914 15:23:40.737138    4831 start.go:159] libmachine.API.Create for "calico-710000" (driver="qemu2")
	I0914 15:23:40.737173    4831 client.go:168] LocalClient.Create starting
	I0914 15:23:40.737287    4831 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem
	I0914 15:23:40.737350    4831 main.go:141] libmachine: Decoding PEM data...
	I0914 15:23:40.737371    4831 main.go:141] libmachine: Parsing certificate...
	I0914 15:23:40.737445    4831 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/cert.pem
	I0914 15:23:40.737486    4831 main.go:141] libmachine: Decoding PEM data...
	I0914 15:23:40.737516    4831 main.go:141] libmachine: Parsing certificate...
	I0914 15:23:40.738018    4831 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17243-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1694625400-17243-arm64.iso...
	I0914 15:23:40.865428    4831 main.go:141] libmachine: Creating SSH key...
	I0914 15:23:40.914066    4831 main.go:141] libmachine: Creating Disk image...
	I0914 15:23:40.914075    4831 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 15:23:40.914218    4831 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/calico-710000/disk.qcow2.raw /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/calico-710000/disk.qcow2
	I0914 15:23:40.922659    4831 main.go:141] libmachine: STDOUT: 
	I0914 15:23:40.922675    4831 main.go:141] libmachine: STDERR: 
	I0914 15:23:40.922727    4831 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/calico-710000/disk.qcow2 +20000M
	I0914 15:23:40.929821    4831 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 15:23:40.929847    4831 main.go:141] libmachine: STDERR: 
	I0914 15:23:40.929861    4831 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/calico-710000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/calico-710000/disk.qcow2
	I0914 15:23:40.929871    4831 main.go:141] libmachine: Starting QEMU VM...
	I0914 15:23:40.929904    4831 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/calico-710000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/calico-710000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/calico-710000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:55:5b:dc:3f:e5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/calico-710000/disk.qcow2
	I0914 15:23:40.931450    4831 main.go:141] libmachine: STDOUT: 
	I0914 15:23:40.931464    4831 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 15:23:40.931479    4831 client.go:171] LocalClient.Create took 194.305333ms
	I0914 15:23:42.933641    4831 start.go:128] duration metric: createHost completed in 2.243993625s
	I0914 15:23:42.933737    4831 start.go:83] releasing machines lock for "calico-710000", held for 2.244514125s
	W0914 15:23:42.934248    4831 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-710000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-710000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:23:42.944993    4831 out.go:177] 
	W0914 15:23:42.948062    4831 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 15:23:42.948092    4831 out.go:239] * 
	* 
	W0914 15:23:42.950777    4831 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 15:23:42.961000    4831 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.69s)

TestNetworkPlugins/group/custom-flannel/Start (10.01s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-710000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-710000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (10.003243167s)

-- stdout --
	* [custom-flannel-710000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17243
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node custom-flannel-710000 in cluster custom-flannel-710000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-710000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0914 15:23:45.350167    4955 out.go:296] Setting OutFile to fd 1 ...
	I0914 15:23:45.350272    4955 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:23:45.350275    4955 out.go:309] Setting ErrFile to fd 2...
	I0914 15:23:45.350278    4955 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:23:45.350420    4955 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17243-1006/.minikube/bin
	I0914 15:23:45.351409    4955 out.go:303] Setting JSON to false
	I0914 15:23:45.366460    4955 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3199,"bootTime":1694727026,"procs":409,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 15:23:45.366553    4955 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0914 15:23:45.370717    4955 out.go:177] * [custom-flannel-710000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0914 15:23:45.378582    4955 out.go:177]   - MINIKUBE_LOCATION=17243
	I0914 15:23:45.382443    4955 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	I0914 15:23:45.378636    4955 notify.go:220] Checking for updates...
	I0914 15:23:45.385501    4955 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 15:23:45.388571    4955 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 15:23:45.391603    4955 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	I0914 15:23:45.394645    4955 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 15:23:45.397887    4955 config.go:182] Loaded profile config "multinode-463000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 15:23:45.397939    4955 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 15:23:45.401568    4955 out.go:177] * Using the qemu2 driver based on user configuration
	I0914 15:23:45.408594    4955 start.go:298] selected driver: qemu2
	I0914 15:23:45.408599    4955 start.go:902] validating driver "qemu2" against <nil>
	I0914 15:23:45.408605    4955 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 15:23:45.410541    4955 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0914 15:23:45.413542    4955 out.go:177] * Automatically selected the socket_vmnet network
	I0914 15:23:45.416667    4955 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 15:23:45.416696    4955 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0914 15:23:45.416708    4955 start_flags.go:316] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0914 15:23:45.416716    4955 start_flags.go:321] config:
	{Name:custom-flannel-710000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:custom-flannel-710000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/sock
et_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 15:23:45.420908    4955 iso.go:125] acquiring lock: {Name:mkd5b68b252c18146a825ff5365883948b3c6983 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 15:23:45.427397    4955 out.go:177] * Starting control plane node custom-flannel-710000 in cluster custom-flannel-710000
	I0914 15:23:45.431575    4955 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0914 15:23:45.431594    4955 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0914 15:23:45.431609    4955 cache.go:57] Caching tarball of preloaded images
	I0914 15:23:45.431707    4955 preload.go:174] Found /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 15:23:45.431719    4955 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0914 15:23:45.431799    4955 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/custom-flannel-710000/config.json ...
	I0914 15:23:45.431819    4955 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/custom-flannel-710000/config.json: {Name:mkd939c5e2556a4bfb68fb9bdaccaeb3d928ddc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 15:23:45.432043    4955 start.go:365] acquiring machines lock for custom-flannel-710000: {Name:mk831f2bd5a9a0c932c8a59972319d68edb53d07 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 15:23:45.432080    4955 start.go:369] acquired machines lock for "custom-flannel-710000" in 26.042µs
	I0914 15:23:45.432095    4955 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-710000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.28.1 ClusterName:custom-flannel-710000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p200
0.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 15:23:45.432126    4955 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 15:23:45.440577    4955 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0914 15:23:45.456925    4955 start.go:159] libmachine.API.Create for "custom-flannel-710000" (driver="qemu2")
	I0914 15:23:45.456949    4955 client.go:168] LocalClient.Create starting
	I0914 15:23:45.457008    4955 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem
	I0914 15:23:45.457032    4955 main.go:141] libmachine: Decoding PEM data...
	I0914 15:23:45.457052    4955 main.go:141] libmachine: Parsing certificate...
	I0914 15:23:45.457098    4955 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/cert.pem
	I0914 15:23:45.457118    4955 main.go:141] libmachine: Decoding PEM data...
	I0914 15:23:45.457127    4955 main.go:141] libmachine: Parsing certificate...
	I0914 15:23:45.457498    4955 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17243-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1694625400-17243-arm64.iso...
	I0914 15:23:45.569993    4955 main.go:141] libmachine: Creating SSH key...
	I0914 15:23:45.864338    4955 main.go:141] libmachine: Creating Disk image...
	I0914 15:23:45.864351    4955 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 15:23:45.864546    4955 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/custom-flannel-710000/disk.qcow2.raw /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/custom-flannel-710000/disk.qcow2
	I0914 15:23:45.873713    4955 main.go:141] libmachine: STDOUT: 
	I0914 15:23:45.873728    4955 main.go:141] libmachine: STDERR: 
	I0914 15:23:45.873792    4955 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/custom-flannel-710000/disk.qcow2 +20000M
	I0914 15:23:45.880918    4955 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 15:23:45.880931    4955 main.go:141] libmachine: STDERR: 
	I0914 15:23:45.880950    4955 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/custom-flannel-710000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/custom-flannel-710000/disk.qcow2
	I0914 15:23:45.880958    4955 main.go:141] libmachine: Starting QEMU VM...
	I0914 15:23:45.880999    4955 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/custom-flannel-710000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/custom-flannel-710000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/custom-flannel-710000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:6a:27:fb:97:ed -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/custom-flannel-710000/disk.qcow2
	I0914 15:23:45.882481    4955 main.go:141] libmachine: STDOUT: 
	I0914 15:23:45.882495    4955 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 15:23:45.882513    4955 client.go:171] LocalClient.Create took 425.568042ms
	I0914 15:23:47.884696    4955 start.go:128] duration metric: createHost completed in 2.452598417s
	I0914 15:23:47.884817    4955 start.go:83] releasing machines lock for "custom-flannel-710000", held for 2.452722833s
	W0914 15:23:47.884888    4955 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:23:47.895238    4955 out.go:177] * Deleting "custom-flannel-710000" in qemu2 ...
	W0914 15:23:47.915272    4955 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:23:47.915309    4955 start.go:703] Will try again in 5 seconds ...
	I0914 15:23:52.917443    4955 start.go:365] acquiring machines lock for custom-flannel-710000: {Name:mk831f2bd5a9a0c932c8a59972319d68edb53d07 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 15:23:52.917908    4955 start.go:369] acquired machines lock for "custom-flannel-710000" in 350.25µs
	I0914 15:23:52.918031    4955 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-710000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.28.1 ClusterName:custom-flannel-710000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p200
0.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 15:23:52.918248    4955 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 15:23:52.926809    4955 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0914 15:23:52.973482    4955 start.go:159] libmachine.API.Create for "custom-flannel-710000" (driver="qemu2")
	I0914 15:23:52.973522    4955 client.go:168] LocalClient.Create starting
	I0914 15:23:52.973648    4955 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem
	I0914 15:23:52.973709    4955 main.go:141] libmachine: Decoding PEM data...
	I0914 15:23:52.973734    4955 main.go:141] libmachine: Parsing certificate...
	I0914 15:23:52.973806    4955 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/cert.pem
	I0914 15:23:52.973841    4955 main.go:141] libmachine: Decoding PEM data...
	I0914 15:23:52.973861    4955 main.go:141] libmachine: Parsing certificate...
	I0914 15:23:52.974365    4955 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17243-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1694625400-17243-arm64.iso...
	I0914 15:23:53.099862    4955 main.go:141] libmachine: Creating SSH key...
	I0914 15:23:53.266172    4955 main.go:141] libmachine: Creating Disk image...
	I0914 15:23:53.266180    4955 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 15:23:53.266319    4955 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/custom-flannel-710000/disk.qcow2.raw /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/custom-flannel-710000/disk.qcow2
	I0914 15:23:53.274947    4955 main.go:141] libmachine: STDOUT: 
	I0914 15:23:53.274963    4955 main.go:141] libmachine: STDERR: 
	I0914 15:23:53.275012    4955 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/custom-flannel-710000/disk.qcow2 +20000M
	I0914 15:23:53.282096    4955 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 15:23:53.282109    4955 main.go:141] libmachine: STDERR: 
	I0914 15:23:53.282123    4955 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/custom-flannel-710000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/custom-flannel-710000/disk.qcow2
	I0914 15:23:53.282131    4955 main.go:141] libmachine: Starting QEMU VM...
	I0914 15:23:53.282175    4955 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/custom-flannel-710000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/custom-flannel-710000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/custom-flannel-710000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:0b:46:cf:77:d2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/custom-flannel-710000/disk.qcow2
	I0914 15:23:53.283669    4955 main.go:141] libmachine: STDOUT: 
	I0914 15:23:53.283688    4955 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 15:23:53.283700    4955 client.go:171] LocalClient.Create took 310.180458ms
	I0914 15:23:55.285832    4955 start.go:128] duration metric: createHost completed in 2.367590375s
	I0914 15:23:55.285905    4955 start.go:83] releasing machines lock for "custom-flannel-710000", held for 2.368023292s
	W0914 15:23:55.286313    4955 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-710000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-710000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:23:55.295075    4955 out.go:177] 
	W0914 15:23:55.300093    4955 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 15:23:55.300128    4955 out.go:239] * 
	* 
	W0914 15:23:55.302548    4955 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 15:23:55.311047    4955 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (10.01s)
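Every start failure in this TestNetworkPlugins group shares the same host-side root cause visible in the stderr above: socket_vmnet_client cannot reach the vmnet helper socket ("Failed to connect to "/var/run/socket_vmnet": Connection refused"), so QEMU is never attached to the socket_vmnet network and minikube exits with status 80. A minimal host-side check, offered only as a sketch and assuming socket_vmnet is installed under /opt/socket_vmnet as in the command line above (the launchd label below is an assumption and may be named differently on this agent):

	# Is the socket_vmnet daemon running, and does the socket it serves exist?
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet
	# If it was registered as a launchd service, check that it is loaded (label is a guess).
	sudo launchctl list | grep -i socket_vmnet
	# socket_vmnet_client connects to the socket and execs the given command with the
	# connection on fd 3, so a trivial command should reproduce the "Connection refused"
	# in isolation if the daemon is down.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true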

TestNetworkPlugins/group/false/Start (9.71s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-710000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-710000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.7050855s)

-- stdout --
	* [false-710000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17243
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node false-710000 in cluster false-710000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-710000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0914 15:23:57.681572    5075 out.go:296] Setting OutFile to fd 1 ...
	I0914 15:23:57.681676    5075 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:23:57.681680    5075 out.go:309] Setting ErrFile to fd 2...
	I0914 15:23:57.681690    5075 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:23:57.681814    5075 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17243-1006/.minikube/bin
	I0914 15:23:57.682859    5075 out.go:303] Setting JSON to false
	I0914 15:23:57.697710    5075 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3211,"bootTime":1694727026,"procs":411,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 15:23:57.697795    5075 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0914 15:23:57.703329    5075 out.go:177] * [false-710000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0914 15:23:57.711482    5075 out.go:177]   - MINIKUBE_LOCATION=17243
	I0914 15:23:57.715439    5075 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	I0914 15:23:57.711520    5075 notify.go:220] Checking for updates...
	I0914 15:23:57.719455    5075 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 15:23:57.722474    5075 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 15:23:57.726439    5075 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	I0914 15:23:57.729420    5075 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 15:23:57.732852    5075 config.go:182] Loaded profile config "multinode-463000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 15:23:57.732897    5075 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 15:23:57.737404    5075 out.go:177] * Using the qemu2 driver based on user configuration
	I0914 15:23:57.744494    5075 start.go:298] selected driver: qemu2
	I0914 15:23:57.744498    5075 start.go:902] validating driver "qemu2" against <nil>
	I0914 15:23:57.744503    5075 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 15:23:57.746442    5075 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0914 15:23:57.750380    5075 out.go:177] * Automatically selected the socket_vmnet network
	I0914 15:23:57.753501    5075 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 15:23:57.753529    5075 cni.go:84] Creating CNI manager for "false"
	I0914 15:23:57.753534    5075 start_flags.go:321] config:
	{Name:false-710000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:false-710000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRIS
ocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPI
D:0 AutoPauseInterval:1m0s}
	I0914 15:23:57.757619    5075 iso.go:125] acquiring lock: {Name:mkd5b68b252c18146a825ff5365883948b3c6983 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 15:23:57.764460    5075 out.go:177] * Starting control plane node false-710000 in cluster false-710000
	I0914 15:23:57.768450    5075 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0914 15:23:57.768474    5075 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0914 15:23:57.768488    5075 cache.go:57] Caching tarball of preloaded images
	I0914 15:23:57.768549    5075 preload.go:174] Found /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 15:23:57.768556    5075 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0914 15:23:57.768623    5075 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/false-710000/config.json ...
	I0914 15:23:57.768635    5075 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/false-710000/config.json: {Name:mk8d42bd3c123da213bafe699e2091c0ef103e6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 15:23:57.768853    5075 start.go:365] acquiring machines lock for false-710000: {Name:mk831f2bd5a9a0c932c8a59972319d68edb53d07 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 15:23:57.768882    5075 start.go:369] acquired machines lock for "false-710000" in 23.583µs
	I0914 15:23:57.768893    5075 start.go:93] Provisioning new machine with config: &{Name:false-710000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.1 ClusterName:false-710000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:26
2144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 15:23:57.768923    5075 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 15:23:57.773462    5075 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0914 15:23:57.789241    5075 start.go:159] libmachine.API.Create for "false-710000" (driver="qemu2")
	I0914 15:23:57.789273    5075 client.go:168] LocalClient.Create starting
	I0914 15:23:57.789326    5075 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem
	I0914 15:23:57.789352    5075 main.go:141] libmachine: Decoding PEM data...
	I0914 15:23:57.789363    5075 main.go:141] libmachine: Parsing certificate...
	I0914 15:23:57.789411    5075 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/cert.pem
	I0914 15:23:57.789430    5075 main.go:141] libmachine: Decoding PEM data...
	I0914 15:23:57.789438    5075 main.go:141] libmachine: Parsing certificate...
	I0914 15:23:57.789739    5075 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17243-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1694625400-17243-arm64.iso...
	I0914 15:23:57.906958    5075 main.go:141] libmachine: Creating SSH key...
	I0914 15:23:58.021592    5075 main.go:141] libmachine: Creating Disk image...
	I0914 15:23:58.021599    5075 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 15:23:58.021735    5075 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/false-710000/disk.qcow2.raw /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/false-710000/disk.qcow2
	I0914 15:23:58.030169    5075 main.go:141] libmachine: STDOUT: 
	I0914 15:23:58.030185    5075 main.go:141] libmachine: STDERR: 
	I0914 15:23:58.030241    5075 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/false-710000/disk.qcow2 +20000M
	I0914 15:23:58.037375    5075 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 15:23:58.037389    5075 main.go:141] libmachine: STDERR: 
	I0914 15:23:58.037406    5075 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/false-710000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/false-710000/disk.qcow2
	I0914 15:23:58.037413    5075 main.go:141] libmachine: Starting QEMU VM...
	I0914 15:23:58.037445    5075 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/false-710000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/false-710000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/false-710000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:97:1f:be:13:25 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/false-710000/disk.qcow2
	I0914 15:23:58.038907    5075 main.go:141] libmachine: STDOUT: 
	I0914 15:23:58.038921    5075 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 15:23:58.038937    5075 client.go:171] LocalClient.Create took 249.664625ms
	I0914 15:24:00.041058    5075 start.go:128] duration metric: createHost completed in 2.27216475s
	I0914 15:24:00.041136    5075 start.go:83] releasing machines lock for "false-710000", held for 2.2722935s
	W0914 15:24:00.041192    5075 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:24:00.047772    5075 out.go:177] * Deleting "false-710000" in qemu2 ...
	W0914 15:24:00.068696    5075 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:24:00.068722    5075 start.go:703] Will try again in 5 seconds ...
	I0914 15:24:05.070887    5075 start.go:365] acquiring machines lock for false-710000: {Name:mk831f2bd5a9a0c932c8a59972319d68edb53d07 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 15:24:05.071459    5075 start.go:369] acquired machines lock for "false-710000" in 440.042µs
	I0914 15:24:05.071603    5075 start.go:93] Provisioning new machine with config: &{Name:false-710000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.1 ClusterName:false-710000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:26
2144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 15:24:05.071867    5075 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 15:24:05.076853    5075 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0914 15:24:05.125137    5075 start.go:159] libmachine.API.Create for "false-710000" (driver="qemu2")
	I0914 15:24:05.125173    5075 client.go:168] LocalClient.Create starting
	I0914 15:24:05.125313    5075 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem
	I0914 15:24:05.125367    5075 main.go:141] libmachine: Decoding PEM data...
	I0914 15:24:05.125385    5075 main.go:141] libmachine: Parsing certificate...
	I0914 15:24:05.125460    5075 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/cert.pem
	I0914 15:24:05.125495    5075 main.go:141] libmachine: Decoding PEM data...
	I0914 15:24:05.125510    5075 main.go:141] libmachine: Parsing certificate...
	I0914 15:24:05.126190    5075 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17243-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1694625400-17243-arm64.iso...
	I0914 15:24:05.253852    5075 main.go:141] libmachine: Creating SSH key...
	I0914 15:24:05.299363    5075 main.go:141] libmachine: Creating Disk image...
	I0914 15:24:05.299369    5075 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 15:24:05.299512    5075 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/false-710000/disk.qcow2.raw /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/false-710000/disk.qcow2
	I0914 15:24:05.307923    5075 main.go:141] libmachine: STDOUT: 
	I0914 15:24:05.307941    5075 main.go:141] libmachine: STDERR: 
	I0914 15:24:05.307995    5075 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/false-710000/disk.qcow2 +20000M
	I0914 15:24:05.315086    5075 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 15:24:05.315109    5075 main.go:141] libmachine: STDERR: 
	I0914 15:24:05.315126    5075 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/false-710000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/false-710000/disk.qcow2
	I0914 15:24:05.315130    5075 main.go:141] libmachine: Starting QEMU VM...
	I0914 15:24:05.315167    5075 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/false-710000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/false-710000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/false-710000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:e4:28:8b:55:42 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/false-710000/disk.qcow2
	I0914 15:24:05.316674    5075 main.go:141] libmachine: STDOUT: 
	I0914 15:24:05.316687    5075 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 15:24:05.316701    5075 client.go:171] LocalClient.Create took 191.527667ms
	I0914 15:24:07.318871    5075 start.go:128] duration metric: createHost completed in 2.247010292s
	I0914 15:24:07.318971    5075 start.go:83] releasing machines lock for "false-710000", held for 2.247530542s
	W0914 15:24:07.319392    5075 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-710000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-710000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:24:07.330040    5075 out.go:177] 
	W0914 15:24:07.334122    5075 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 15:24:07.334165    5075 out.go:239] * 
	* 
	W0914 15:24:07.336227    5075 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 15:24:07.345187    5075 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.71s)

TestNetworkPlugins/group/enable-default-cni/Start (9.85s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-710000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-710000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.843279417s)

-- stdout --
	* [enable-default-cni-710000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17243
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node enable-default-cni-710000 in cluster enable-default-cni-710000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-710000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0914 15:24:09.541867    5189 out.go:296] Setting OutFile to fd 1 ...
	I0914 15:24:09.542004    5189 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:24:09.542007    5189 out.go:309] Setting ErrFile to fd 2...
	I0914 15:24:09.542010    5189 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:24:09.542141    5189 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17243-1006/.minikube/bin
	I0914 15:24:09.543233    5189 out.go:303] Setting JSON to false
	I0914 15:24:09.558163    5189 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3223,"bootTime":1694727026,"procs":413,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 15:24:09.558245    5189 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0914 15:24:09.563955    5189 out.go:177] * [enable-default-cni-710000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0914 15:24:09.575706    5189 out.go:177]   - MINIKUBE_LOCATION=17243
	I0914 15:24:09.571912    5189 notify.go:220] Checking for updates...
	I0914 15:24:09.581844    5189 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	I0914 15:24:09.583214    5189 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 15:24:09.586838    5189 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 15:24:09.589887    5189 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	I0914 15:24:09.591254    5189 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 15:24:09.595155    5189 config.go:182] Loaded profile config "multinode-463000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 15:24:09.595201    5189 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 15:24:09.598884    5189 out.go:177] * Using the qemu2 driver based on user configuration
	I0914 15:24:09.605846    5189 start.go:298] selected driver: qemu2
	I0914 15:24:09.605850    5189 start.go:902] validating driver "qemu2" against <nil>
	I0914 15:24:09.605856    5189 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 15:24:09.607796    5189 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0914 15:24:09.610838    5189 out.go:177] * Automatically selected the socket_vmnet network
	E0914 15:24:09.613886    5189 start_flags.go:455] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0914 15:24:09.613907    5189 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 15:24:09.613926    5189 cni.go:84] Creating CNI manager for "bridge"
	I0914 15:24:09.613932    5189 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0914 15:24:09.613942    5189 start_flags.go:321] config:
	{Name:enable-default-cni-710000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:enable-default-cni-710000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet Sta
ticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 15:24:09.618272    5189 iso.go:125] acquiring lock: {Name:mkd5b68b252c18146a825ff5365883948b3c6983 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 15:24:09.626897    5189 out.go:177] * Starting control plane node enable-default-cni-710000 in cluster enable-default-cni-710000
	I0914 15:24:09.630826    5189 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0914 15:24:09.630843    5189 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0914 15:24:09.630857    5189 cache.go:57] Caching tarball of preloaded images
	I0914 15:24:09.630931    5189 preload.go:174] Found /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 15:24:09.630937    5189 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0914 15:24:09.631010    5189 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/enable-default-cni-710000/config.json ...
	I0914 15:24:09.631023    5189 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/enable-default-cni-710000/config.json: {Name:mka96a3eb056580789b7fd77f7e96c06c01542ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 15:24:09.631230    5189 start.go:365] acquiring machines lock for enable-default-cni-710000: {Name:mk831f2bd5a9a0c932c8a59972319d68edb53d07 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 15:24:09.631267    5189 start.go:369] acquired machines lock for "enable-default-cni-710000" in 25.541µs
	I0914 15:24:09.631279    5189 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-710000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:enable-default-cni-710000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 15:24:09.631319    5189 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 15:24:09.638875    5189 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0914 15:24:09.655595    5189 start.go:159] libmachine.API.Create for "enable-default-cni-710000" (driver="qemu2")
	I0914 15:24:09.655620    5189 client.go:168] LocalClient.Create starting
	I0914 15:24:09.655678    5189 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem
	I0914 15:24:09.655707    5189 main.go:141] libmachine: Decoding PEM data...
	I0914 15:24:09.655718    5189 main.go:141] libmachine: Parsing certificate...
	I0914 15:24:09.655763    5189 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/cert.pem
	I0914 15:24:09.655781    5189 main.go:141] libmachine: Decoding PEM data...
	I0914 15:24:09.655791    5189 main.go:141] libmachine: Parsing certificate...
	I0914 15:24:09.656169    5189 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17243-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1694625400-17243-arm64.iso...
	I0914 15:24:09.772084    5189 main.go:141] libmachine: Creating SSH key...
	I0914 15:24:09.841415    5189 main.go:141] libmachine: Creating Disk image...
	I0914 15:24:09.841421    5189 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 15:24:09.841545    5189 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/enable-default-cni-710000/disk.qcow2.raw /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/enable-default-cni-710000/disk.qcow2
	I0914 15:24:09.850037    5189 main.go:141] libmachine: STDOUT: 
	I0914 15:24:09.850052    5189 main.go:141] libmachine: STDERR: 
	I0914 15:24:09.850101    5189 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/enable-default-cni-710000/disk.qcow2 +20000M
	I0914 15:24:09.857197    5189 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 15:24:09.857209    5189 main.go:141] libmachine: STDERR: 
	I0914 15:24:09.857228    5189 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/enable-default-cni-710000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/enable-default-cni-710000/disk.qcow2
	I0914 15:24:09.857236    5189 main.go:141] libmachine: Starting QEMU VM...
	I0914 15:24:09.857440    5189 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/enable-default-cni-710000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/enable-default-cni-710000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/enable-default-cni-710000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:89:1d:f6:1a:2f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/enable-default-cni-710000/disk.qcow2
	I0914 15:24:09.859962    5189 main.go:141] libmachine: STDOUT: 
	I0914 15:24:09.859995    5189 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 15:24:09.860020    5189 client.go:171] LocalClient.Create took 204.395166ms
	I0914 15:24:11.862148    5189 start.go:128] duration metric: createHost completed in 2.230855125s
	I0914 15:24:11.862221    5189 start.go:83] releasing machines lock for "enable-default-cni-710000", held for 2.230992s
	W0914 15:24:11.862281    5189 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:24:11.870399    5189 out.go:177] * Deleting "enable-default-cni-710000" in qemu2 ...
	W0914 15:24:11.890222    5189 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:24:11.890254    5189 start.go:703] Will try again in 5 seconds ...
	I0914 15:24:16.892324    5189 start.go:365] acquiring machines lock for enable-default-cni-710000: {Name:mk831f2bd5a9a0c932c8a59972319d68edb53d07 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 15:24:16.892796    5189 start.go:369] acquired machines lock for "enable-default-cni-710000" in 382.416µs
	I0914 15:24:16.892909    5189 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-710000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:enable-default-cni-710000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 15:24:16.893167    5189 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 15:24:16.901749    5189 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0914 15:24:16.952257    5189 start.go:159] libmachine.API.Create for "enable-default-cni-710000" (driver="qemu2")
	I0914 15:24:16.952298    5189 client.go:168] LocalClient.Create starting
	I0914 15:24:16.952450    5189 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem
	I0914 15:24:16.952526    5189 main.go:141] libmachine: Decoding PEM data...
	I0914 15:24:16.952548    5189 main.go:141] libmachine: Parsing certificate...
	I0914 15:24:16.952621    5189 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/cert.pem
	I0914 15:24:16.952661    5189 main.go:141] libmachine: Decoding PEM data...
	I0914 15:24:16.952682    5189 main.go:141] libmachine: Parsing certificate...
	I0914 15:24:16.953215    5189 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17243-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1694625400-17243-arm64.iso...
	I0914 15:24:17.081377    5189 main.go:141] libmachine: Creating SSH key...
	I0914 15:24:17.295700    5189 main.go:141] libmachine: Creating Disk image...
	I0914 15:24:17.295707    5189 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 15:24:17.295872    5189 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/enable-default-cni-710000/disk.qcow2.raw /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/enable-default-cni-710000/disk.qcow2
	I0914 15:24:17.305008    5189 main.go:141] libmachine: STDOUT: 
	I0914 15:24:17.305025    5189 main.go:141] libmachine: STDERR: 
	I0914 15:24:17.305079    5189 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/enable-default-cni-710000/disk.qcow2 +20000M
	I0914 15:24:17.312286    5189 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 15:24:17.312300    5189 main.go:141] libmachine: STDERR: 
	I0914 15:24:17.312313    5189 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/enable-default-cni-710000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/enable-default-cni-710000/disk.qcow2
	I0914 15:24:17.312323    5189 main.go:141] libmachine: Starting QEMU VM...
	I0914 15:24:17.312365    5189 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/enable-default-cni-710000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/enable-default-cni-710000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/enable-default-cni-710000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:0f:72:35:1f:c5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/enable-default-cni-710000/disk.qcow2
	I0914 15:24:17.313896    5189 main.go:141] libmachine: STDOUT: 
	I0914 15:24:17.313911    5189 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 15:24:17.313924    5189 client.go:171] LocalClient.Create took 361.628334ms
	I0914 15:24:19.316164    5189 start.go:128] duration metric: createHost completed in 2.422940541s
	I0914 15:24:19.316238    5189 start.go:83] releasing machines lock for "enable-default-cni-710000", held for 2.423468417s
	W0914 15:24:19.316621    5189 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-710000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-710000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:24:19.326268    5189 out.go:177] 
	W0914 15:24:19.331428    5189 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 15:24:19.331461    5189 out.go:239] * 
	* 
	W0914 15:24:19.334134    5189 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 15:24:19.343230    5189 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.85s)
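Note that the E-level line in the stderr above ("Found deprecated --enable-default-cni flag, setting --cni=bridge") means this profile is configured with the bridge CNI rather than a distinct "default CNI"; the failure itself is still the socket_vmnet connection, not the flag. Under that assumption, an equivalent invocation without the deprecated flag (profile name and binary path taken from this log) would be:

	out/minikube-darwin-arm64 start -p enable-default-cni-710000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2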

TestNetworkPlugins/group/flannel/Start (9.79s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-710000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
E0914 15:24:30.510418    1425 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/client.crt: no such file or directory
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-710000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.788305875s)

-- stdout --
	* [flannel-710000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17243
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node flannel-710000 in cluster flannel-710000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-710000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0914 15:24:21.535509    5306 out.go:296] Setting OutFile to fd 1 ...
	I0914 15:24:21.535649    5306 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:24:21.535652    5306 out.go:309] Setting ErrFile to fd 2...
	I0914 15:24:21.535655    5306 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:24:21.535804    5306 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17243-1006/.minikube/bin
	I0914 15:24:21.536825    5306 out.go:303] Setting JSON to false
	I0914 15:24:21.552125    5306 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3235,"bootTime":1694727026,"procs":416,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 15:24:21.552190    5306 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0914 15:24:21.557535    5306 out.go:177] * [flannel-710000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0914 15:24:21.564539    5306 out.go:177]   - MINIKUBE_LOCATION=17243
	I0914 15:24:21.564589    5306 notify.go:220] Checking for updates...
	I0914 15:24:21.570465    5306 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	I0914 15:24:21.573522    5306 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 15:24:21.577524    5306 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 15:24:21.580519    5306 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	I0914 15:24:21.583501    5306 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 15:24:21.586859    5306 config.go:182] Loaded profile config "multinode-463000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 15:24:21.586913    5306 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 15:24:21.591506    5306 out.go:177] * Using the qemu2 driver based on user configuration
	I0914 15:24:21.598468    5306 start.go:298] selected driver: qemu2
	I0914 15:24:21.598473    5306 start.go:902] validating driver "qemu2" against <nil>
	I0914 15:24:21.598479    5306 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 15:24:21.600420    5306 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0914 15:24:21.604484    5306 out.go:177] * Automatically selected the socket_vmnet network
	I0914 15:24:21.607586    5306 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 15:24:21.607625    5306 cni.go:84] Creating CNI manager for "flannel"
	I0914 15:24:21.607631    5306 start_flags.go:316] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0914 15:24:21.607638    5306 start_flags.go:321] config:
	{Name:flannel-710000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:flannel-710000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: S
SHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 15:24:21.612003    5306 iso.go:125] acquiring lock: {Name:mkd5b68b252c18146a825ff5365883948b3c6983 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 15:24:21.619499    5306 out.go:177] * Starting control plane node flannel-710000 in cluster flannel-710000
	I0914 15:24:21.622430    5306 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0914 15:24:21.622450    5306 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0914 15:24:21.622463    5306 cache.go:57] Caching tarball of preloaded images
	I0914 15:24:21.622540    5306 preload.go:174] Found /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 15:24:21.622546    5306 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0914 15:24:21.622626    5306 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/flannel-710000/config.json ...
	I0914 15:24:21.622640    5306 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/flannel-710000/config.json: {Name:mkcf4388c36ab6713e21e6c3047c1dd8a42ce8a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 15:24:21.622854    5306 start.go:365] acquiring machines lock for flannel-710000: {Name:mk831f2bd5a9a0c932c8a59972319d68edb53d07 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 15:24:21.622893    5306 start.go:369] acquired machines lock for "flannel-710000" in 26.875µs
	I0914 15:24:21.622909    5306 start.go:93] Provisioning new machine with config: &{Name:flannel-710000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.1 ClusterName:flannel-710000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 15:24:21.622949    5306 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 15:24:21.630385    5306 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0914 15:24:21.647614    5306 start.go:159] libmachine.API.Create for "flannel-710000" (driver="qemu2")
	I0914 15:24:21.647661    5306 client.go:168] LocalClient.Create starting
	I0914 15:24:21.647736    5306 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem
	I0914 15:24:21.647766    5306 main.go:141] libmachine: Decoding PEM data...
	I0914 15:24:21.647779    5306 main.go:141] libmachine: Parsing certificate...
	I0914 15:24:21.647826    5306 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/cert.pem
	I0914 15:24:21.647851    5306 main.go:141] libmachine: Decoding PEM data...
	I0914 15:24:21.647862    5306 main.go:141] libmachine: Parsing certificate...
	I0914 15:24:21.648227    5306 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17243-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1694625400-17243-arm64.iso...
	I0914 15:24:21.760062    5306 main.go:141] libmachine: Creating SSH key...
	I0914 15:24:21.851518    5306 main.go:141] libmachine: Creating Disk image...
	I0914 15:24:21.851524    5306 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 15:24:21.851676    5306 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/flannel-710000/disk.qcow2.raw /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/flannel-710000/disk.qcow2
	I0914 15:24:21.860132    5306 main.go:141] libmachine: STDOUT: 
	I0914 15:24:21.860148    5306 main.go:141] libmachine: STDERR: 
	I0914 15:24:21.860191    5306 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/flannel-710000/disk.qcow2 +20000M
	I0914 15:24:21.867404    5306 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 15:24:21.867415    5306 main.go:141] libmachine: STDERR: 
	I0914 15:24:21.867435    5306 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/flannel-710000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/flannel-710000/disk.qcow2
	I0914 15:24:21.867440    5306 main.go:141] libmachine: Starting QEMU VM...
	I0914 15:24:21.867475    5306 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/flannel-710000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/flannel-710000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/flannel-710000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:2c:b6:ec:19:02 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/flannel-710000/disk.qcow2
	I0914 15:24:21.868914    5306 main.go:141] libmachine: STDOUT: 
	I0914 15:24:21.868927    5306 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 15:24:21.868944    5306 client.go:171] LocalClient.Create took 221.281792ms
	I0914 15:24:23.871101    5306 start.go:128] duration metric: createHost completed in 2.248169584s
	I0914 15:24:23.871217    5306 start.go:83] releasing machines lock for "flannel-710000", held for 2.248356333s
	W0914 15:24:23.871356    5306 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:24:23.882665    5306 out.go:177] * Deleting "flannel-710000" in qemu2 ...
	W0914 15:24:23.901968    5306 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:24:23.902001    5306 start.go:703] Will try again in 5 seconds ...
	I0914 15:24:28.904192    5306 start.go:365] acquiring machines lock for flannel-710000: {Name:mk831f2bd5a9a0c932c8a59972319d68edb53d07 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 15:24:28.904648    5306 start.go:369] acquired machines lock for "flannel-710000" in 317.208µs
	I0914 15:24:28.904763    5306 start.go:93] Provisioning new machine with config: &{Name:flannel-710000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.1 ClusterName:flannel-710000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 15:24:28.905061    5306 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 15:24:28.914706    5306 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0914 15:24:28.963553    5306 start.go:159] libmachine.API.Create for "flannel-710000" (driver="qemu2")
	I0914 15:24:28.963599    5306 client.go:168] LocalClient.Create starting
	I0914 15:24:28.963744    5306 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem
	I0914 15:24:28.963808    5306 main.go:141] libmachine: Decoding PEM data...
	I0914 15:24:28.963832    5306 main.go:141] libmachine: Parsing certificate...
	I0914 15:24:28.963903    5306 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/cert.pem
	I0914 15:24:28.963940    5306 main.go:141] libmachine: Decoding PEM data...
	I0914 15:24:28.963991    5306 main.go:141] libmachine: Parsing certificate...
	I0914 15:24:28.964470    5306 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17243-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1694625400-17243-arm64.iso...
	I0914 15:24:29.091878    5306 main.go:141] libmachine: Creating SSH key...
	I0914 15:24:29.237757    5306 main.go:141] libmachine: Creating Disk image...
	I0914 15:24:29.237765    5306 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 15:24:29.237908    5306 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/flannel-710000/disk.qcow2.raw /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/flannel-710000/disk.qcow2
	I0914 15:24:29.246937    5306 main.go:141] libmachine: STDOUT: 
	I0914 15:24:29.246955    5306 main.go:141] libmachine: STDERR: 
	I0914 15:24:29.247023    5306 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/flannel-710000/disk.qcow2 +20000M
	I0914 15:24:29.254436    5306 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 15:24:29.254449    5306 main.go:141] libmachine: STDERR: 
	I0914 15:24:29.254462    5306 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/flannel-710000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/flannel-710000/disk.qcow2
	I0914 15:24:29.254470    5306 main.go:141] libmachine: Starting QEMU VM...
	I0914 15:24:29.254516    5306 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/flannel-710000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/flannel-710000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/flannel-710000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:57:d9:54:57:95 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/flannel-710000/disk.qcow2
	I0914 15:24:29.256136    5306 main.go:141] libmachine: STDOUT: 
	I0914 15:24:29.256148    5306 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 15:24:29.256162    5306 client.go:171] LocalClient.Create took 292.565625ms
	I0914 15:24:31.258288    5306 start.go:128] duration metric: createHost completed in 2.353234417s
	I0914 15:24:31.258386    5306 start.go:83] releasing machines lock for "flannel-710000", held for 2.35374075s
	W0914 15:24:31.258845    5306 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-710000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-710000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:24:31.265537    5306 out.go:177] 
	W0914 15:24:31.270565    5306 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 15:24:31.270635    5306 out.go:239] * 
	* 
	W0914 15:24:31.273026    5306 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 15:24:31.282329    5306 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.79s)
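If the daemon is down or the socket is stale, bringing socket_vmnet back up on the host should let these starts get past provisioning. A rough sketch only, assuming the same /opt/socket_vmnet prefix; the gateway address is the upstream README default and the flag spelling may differ by socket_vmnet version, neither is recorded in this log:

	# Run the daemon in the foreground as root so any bind/permission errors are visible.
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

	# In another shell, retry one of the failing starts by hand, e.g.:
	out/minikube-darwin-arm64 start -p flannel-710000 --memory=3072 --cni=flannel --driver=qemu2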

                                                
                                    
TestNetworkPlugins/group/bridge/Start (9.76s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-710000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-710000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.759388917s)

                                                
                                                
-- stdout --
	* [bridge-710000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17243
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node bridge-710000 in cluster bridge-710000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-710000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 15:24:33.665244    5428 out.go:296] Setting OutFile to fd 1 ...
	I0914 15:24:33.665379    5428 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:24:33.665382    5428 out.go:309] Setting ErrFile to fd 2...
	I0914 15:24:33.665385    5428 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:24:33.665519    5428 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17243-1006/.minikube/bin
	I0914 15:24:33.666564    5428 out.go:303] Setting JSON to false
	I0914 15:24:33.681555    5428 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3247,"bootTime":1694727026,"procs":414,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 15:24:33.681617    5428 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0914 15:24:33.684590    5428 out.go:177] * [bridge-710000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0914 15:24:33.692917    5428 out.go:177]   - MINIKUBE_LOCATION=17243
	I0914 15:24:33.696958    5428 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	I0914 15:24:33.692970    5428 notify.go:220] Checking for updates...
	I0914 15:24:33.702910    5428 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 15:24:33.705843    5428 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 15:24:33.708846    5428 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	I0914 15:24:33.711898    5428 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 15:24:33.713714    5428 config.go:182] Loaded profile config "multinode-463000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 15:24:33.713761    5428 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 15:24:33.717875    5428 out.go:177] * Using the qemu2 driver based on user configuration
	I0914 15:24:33.724754    5428 start.go:298] selected driver: qemu2
	I0914 15:24:33.724758    5428 start.go:902] validating driver "qemu2" against <nil>
	I0914 15:24:33.724763    5428 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 15:24:33.726714    5428 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0914 15:24:33.729834    5428 out.go:177] * Automatically selected the socket_vmnet network
	I0914 15:24:33.732962    5428 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 15:24:33.732986    5428 cni.go:84] Creating CNI manager for "bridge"
	I0914 15:24:33.732991    5428 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0914 15:24:33.732998    5428 start_flags.go:321] config:
	{Name:bridge-710000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:bridge-710000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHA
gentPID:0 AutoPauseInterval:1m0s}
	I0914 15:24:33.737107    5428 iso.go:125] acquiring lock: {Name:mkd5b68b252c18146a825ff5365883948b3c6983 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 15:24:33.743845    5428 out.go:177] * Starting control plane node bridge-710000 in cluster bridge-710000
	I0914 15:24:33.747937    5428 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0914 15:24:33.747957    5428 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0914 15:24:33.747967    5428 cache.go:57] Caching tarball of preloaded images
	I0914 15:24:33.748057    5428 preload.go:174] Found /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 15:24:33.748063    5428 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0914 15:24:33.748139    5428 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/bridge-710000/config.json ...
	I0914 15:24:33.748152    5428 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/bridge-710000/config.json: {Name:mk819d890276ab1541c17198fb474e6b30b5c36f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 15:24:33.748359    5428 start.go:365] acquiring machines lock for bridge-710000: {Name:mk831f2bd5a9a0c932c8a59972319d68edb53d07 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 15:24:33.748391    5428 start.go:369] acquired machines lock for "bridge-710000" in 25.5µs
	I0914 15:24:33.748405    5428 start.go:93] Provisioning new machine with config: &{Name:bridge-710000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.28.1 ClusterName:bridge-710000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 15:24:33.748444    5428 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 15:24:33.752869    5428 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0914 15:24:33.769101    5428 start.go:159] libmachine.API.Create for "bridge-710000" (driver="qemu2")
	I0914 15:24:33.769125    5428 client.go:168] LocalClient.Create starting
	I0914 15:24:33.769192    5428 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem
	I0914 15:24:33.769221    5428 main.go:141] libmachine: Decoding PEM data...
	I0914 15:24:33.769235    5428 main.go:141] libmachine: Parsing certificate...
	I0914 15:24:33.769275    5428 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/cert.pem
	I0914 15:24:33.769294    5428 main.go:141] libmachine: Decoding PEM data...
	I0914 15:24:33.769301    5428 main.go:141] libmachine: Parsing certificate...
	I0914 15:24:33.769669    5428 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17243-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1694625400-17243-arm64.iso...
	I0914 15:24:33.883104    5428 main.go:141] libmachine: Creating SSH key...
	I0914 15:24:33.975668    5428 main.go:141] libmachine: Creating Disk image...
	I0914 15:24:33.975676    5428 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 15:24:33.975825    5428 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/bridge-710000/disk.qcow2.raw /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/bridge-710000/disk.qcow2
	I0914 15:24:33.984387    5428 main.go:141] libmachine: STDOUT: 
	I0914 15:24:33.984403    5428 main.go:141] libmachine: STDERR: 
	I0914 15:24:33.984457    5428 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/bridge-710000/disk.qcow2 +20000M
	I0914 15:24:33.991687    5428 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 15:24:33.991699    5428 main.go:141] libmachine: STDERR: 
	I0914 15:24:33.991718    5428 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/bridge-710000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/bridge-710000/disk.qcow2
	I0914 15:24:33.991726    5428 main.go:141] libmachine: Starting QEMU VM...
	I0914 15:24:33.991761    5428 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/bridge-710000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/bridge-710000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/bridge-710000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:5c:32:04:a3:c2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/bridge-710000/disk.qcow2
	I0914 15:24:33.993243    5428 main.go:141] libmachine: STDOUT: 
	I0914 15:24:33.993257    5428 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 15:24:33.993275    5428 client.go:171] LocalClient.Create took 224.148833ms
	I0914 15:24:35.995464    5428 start.go:128] duration metric: createHost completed in 2.24704575s
	I0914 15:24:35.995511    5428 start.go:83] releasing machines lock for "bridge-710000", held for 2.247159125s
	W0914 15:24:35.995581    5428 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:24:36.003814    5428 out.go:177] * Deleting "bridge-710000" in qemu2 ...
	W0914 15:24:36.023875    5428 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:24:36.023905    5428 start.go:703] Will try again in 5 seconds ...
	I0914 15:24:41.026006    5428 start.go:365] acquiring machines lock for bridge-710000: {Name:mk831f2bd5a9a0c932c8a59972319d68edb53d07 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 15:24:41.026424    5428 start.go:369] acquired machines lock for "bridge-710000" in 329.666µs
	I0914 15:24:41.026552    5428 start.go:93] Provisioning new machine with config: &{Name:bridge-710000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.28.1 ClusterName:bridge-710000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 15:24:41.026834    5428 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 15:24:41.036408    5428 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0914 15:24:41.084232    5428 start.go:159] libmachine.API.Create for "bridge-710000" (driver="qemu2")
	I0914 15:24:41.084273    5428 client.go:168] LocalClient.Create starting
	I0914 15:24:41.084413    5428 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem
	I0914 15:24:41.084478    5428 main.go:141] libmachine: Decoding PEM data...
	I0914 15:24:41.084501    5428 main.go:141] libmachine: Parsing certificate...
	I0914 15:24:41.084585    5428 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/cert.pem
	I0914 15:24:41.084627    5428 main.go:141] libmachine: Decoding PEM data...
	I0914 15:24:41.084654    5428 main.go:141] libmachine: Parsing certificate...
	I0914 15:24:41.085246    5428 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17243-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1694625400-17243-arm64.iso...
	I0914 15:24:41.215359    5428 main.go:141] libmachine: Creating SSH key...
	I0914 15:24:41.337963    5428 main.go:141] libmachine: Creating Disk image...
	I0914 15:24:41.337970    5428 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 15:24:41.338121    5428 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/bridge-710000/disk.qcow2.raw /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/bridge-710000/disk.qcow2
	I0914 15:24:41.346675    5428 main.go:141] libmachine: STDOUT: 
	I0914 15:24:41.346689    5428 main.go:141] libmachine: STDERR: 
	I0914 15:24:41.346740    5428 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/bridge-710000/disk.qcow2 +20000M
	I0914 15:24:41.353888    5428 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 15:24:41.353902    5428 main.go:141] libmachine: STDERR: 
	I0914 15:24:41.353921    5428 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/bridge-710000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/bridge-710000/disk.qcow2
	I0914 15:24:41.353928    5428 main.go:141] libmachine: Starting QEMU VM...
	I0914 15:24:41.353966    5428 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/bridge-710000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/bridge-710000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/bridge-710000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:5a:ed:ec:1f:dd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/bridge-710000/disk.qcow2
	I0914 15:24:41.355488    5428 main.go:141] libmachine: STDOUT: 
	I0914 15:24:41.355503    5428 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 15:24:41.355515    5428 client.go:171] LocalClient.Create took 271.242208ms
	I0914 15:24:43.357690    5428 start.go:128] duration metric: createHost completed in 2.330869583s
	I0914 15:24:43.357797    5428 start.go:83] releasing machines lock for "bridge-710000", held for 2.331398958s
	W0914 15:24:43.358313    5428 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-710000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-710000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:24:43.366777    5428 out.go:177] 
	W0914 15:24:43.371907    5428 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 15:24:43.371940    5428 out.go:239] * 
	* 
	W0914 15:24:43.374398    5428 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 15:24:43.383701    5428 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.76s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (9.63s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-710000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-710000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.623259584s)

                                                
                                                
-- stdout --
	* [kubenet-710000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17243
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kubenet-710000 in cluster kubenet-710000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-710000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 15:24:45.570104    5547 out.go:296] Setting OutFile to fd 1 ...
	I0914 15:24:45.570230    5547 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:24:45.570233    5547 out.go:309] Setting ErrFile to fd 2...
	I0914 15:24:45.570236    5547 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:24:45.570380    5547 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17243-1006/.minikube/bin
	I0914 15:24:45.571413    5547 out.go:303] Setting JSON to false
	I0914 15:24:45.586555    5547 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3259,"bootTime":1694727026,"procs":413,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 15:24:45.586641    5547 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0914 15:24:45.591932    5547 out.go:177] * [kubenet-710000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0914 15:24:45.599994    5547 out.go:177]   - MINIKUBE_LOCATION=17243
	I0914 15:24:45.602922    5547 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	I0914 15:24:45.600059    5547 notify.go:220] Checking for updates...
	I0914 15:24:45.605928    5547 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 15:24:45.608956    5547 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 15:24:45.612887    5547 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	I0914 15:24:45.615929    5547 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 15:24:45.619298    5547 config.go:182] Loaded profile config "multinode-463000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 15:24:45.619345    5547 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 15:24:45.623877    5547 out.go:177] * Using the qemu2 driver based on user configuration
	I0914 15:24:45.630966    5547 start.go:298] selected driver: qemu2
	I0914 15:24:45.630971    5547 start.go:902] validating driver "qemu2" against <nil>
	I0914 15:24:45.630978    5547 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 15:24:45.632910    5547 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0914 15:24:45.635908    5547 out.go:177] * Automatically selected the socket_vmnet network
	I0914 15:24:45.640032    5547 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 15:24:45.640061    5547 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0914 15:24:45.640065    5547 start_flags.go:321] config:
	{Name:kubenet-710000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:kubenet-710000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHA
gentPID:0 AutoPauseInterval:1m0s}
	I0914 15:24:45.644152    5547 iso.go:125] acquiring lock: {Name:mkd5b68b252c18146a825ff5365883948b3c6983 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 15:24:45.650946    5547 out.go:177] * Starting control plane node kubenet-710000 in cluster kubenet-710000
	I0914 15:24:45.654945    5547 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0914 15:24:45.654965    5547 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0914 15:24:45.654981    5547 cache.go:57] Caching tarball of preloaded images
	I0914 15:24:45.655048    5547 preload.go:174] Found /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 15:24:45.655054    5547 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0914 15:24:45.655131    5547 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/kubenet-710000/config.json ...
	I0914 15:24:45.655150    5547 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/kubenet-710000/config.json: {Name:mk60651270e56731349a89fcf057fc51d40c076f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 15:24:45.655364    5547 start.go:365] acquiring machines lock for kubenet-710000: {Name:mk831f2bd5a9a0c932c8a59972319d68edb53d07 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 15:24:45.655396    5547 start.go:369] acquired machines lock for "kubenet-710000" in 25.75µs
	I0914 15:24:45.655407    5547 start.go:93] Provisioning new machine with config: &{Name:kubenet-710000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.1 ClusterName:kubenet-710000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 15:24:45.655472    5547 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 15:24:45.663916    5547 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0914 15:24:45.680819    5547 start.go:159] libmachine.API.Create for "kubenet-710000" (driver="qemu2")
	I0914 15:24:45.680847    5547 client.go:168] LocalClient.Create starting
	I0914 15:24:45.680924    5547 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem
	I0914 15:24:45.680951    5547 main.go:141] libmachine: Decoding PEM data...
	I0914 15:24:45.680965    5547 main.go:141] libmachine: Parsing certificate...
	I0914 15:24:45.681011    5547 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/cert.pem
	I0914 15:24:45.681032    5547 main.go:141] libmachine: Decoding PEM data...
	I0914 15:24:45.681047    5547 main.go:141] libmachine: Parsing certificate...
	I0914 15:24:45.681394    5547 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17243-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1694625400-17243-arm64.iso...
	I0914 15:24:45.796419    5547 main.go:141] libmachine: Creating SSH key...
	I0914 15:24:45.831747    5547 main.go:141] libmachine: Creating Disk image...
	I0914 15:24:45.831753    5547 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 15:24:45.831886    5547 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/kubenet-710000/disk.qcow2.raw /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/kubenet-710000/disk.qcow2
	I0914 15:24:45.840320    5547 main.go:141] libmachine: STDOUT: 
	I0914 15:24:45.840335    5547 main.go:141] libmachine: STDERR: 
	I0914 15:24:45.840379    5547 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/kubenet-710000/disk.qcow2 +20000M
	I0914 15:24:45.847438    5547 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 15:24:45.847452    5547 main.go:141] libmachine: STDERR: 
	I0914 15:24:45.847470    5547 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/kubenet-710000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/kubenet-710000/disk.qcow2
	I0914 15:24:45.847480    5547 main.go:141] libmachine: Starting QEMU VM...
	I0914 15:24:45.847510    5547 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/kubenet-710000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/kubenet-710000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/kubenet-710000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:a2:4a:ed:c4:60 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/kubenet-710000/disk.qcow2
	I0914 15:24:45.849100    5547 main.go:141] libmachine: STDOUT: 
	I0914 15:24:45.849124    5547 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 15:24:45.849146    5547 client.go:171] LocalClient.Create took 168.293916ms
	I0914 15:24:47.851294    5547 start.go:128] duration metric: createHost completed in 2.195844625s
	I0914 15:24:47.851366    5547 start.go:83] releasing machines lock for "kubenet-710000", held for 2.196006875s
	W0914 15:24:47.851462    5547 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:24:47.860582    5547 out.go:177] * Deleting "kubenet-710000" in qemu2 ...
	W0914 15:24:47.884052    5547 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:24:47.884080    5547 start.go:703] Will try again in 5 seconds ...
	I0914 15:24:52.886237    5547 start.go:365] acquiring machines lock for kubenet-710000: {Name:mk831f2bd5a9a0c932c8a59972319d68edb53d07 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 15:24:52.886740    5547 start.go:369] acquired machines lock for "kubenet-710000" in 371.417µs
	I0914 15:24:52.886910    5547 start.go:93] Provisioning new machine with config: &{Name:kubenet-710000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.1 ClusterName:kubenet-710000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 15:24:52.887244    5547 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 15:24:52.896897    5547 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0914 15:24:52.940758    5547 start.go:159] libmachine.API.Create for "kubenet-710000" (driver="qemu2")
	I0914 15:24:52.940830    5547 client.go:168] LocalClient.Create starting
	I0914 15:24:52.940961    5547 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem
	I0914 15:24:52.941039    5547 main.go:141] libmachine: Decoding PEM data...
	I0914 15:24:52.941068    5547 main.go:141] libmachine: Parsing certificate...
	I0914 15:24:52.941139    5547 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/cert.pem
	I0914 15:24:52.941175    5547 main.go:141] libmachine: Decoding PEM data...
	I0914 15:24:52.941187    5547 main.go:141] libmachine: Parsing certificate...
	I0914 15:24:52.941735    5547 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17243-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1694625400-17243-arm64.iso...
	I0914 15:24:53.071569    5547 main.go:141] libmachine: Creating SSH key...
	I0914 15:24:53.107162    5547 main.go:141] libmachine: Creating Disk image...
	I0914 15:24:53.107168    5547 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 15:24:53.107292    5547 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/kubenet-710000/disk.qcow2.raw /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/kubenet-710000/disk.qcow2
	I0914 15:24:53.115890    5547 main.go:141] libmachine: STDOUT: 
	I0914 15:24:53.115912    5547 main.go:141] libmachine: STDERR: 
	I0914 15:24:53.115955    5547 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/kubenet-710000/disk.qcow2 +20000M
	I0914 15:24:53.123035    5547 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 15:24:53.123043    5547 main.go:141] libmachine: STDERR: 
	I0914 15:24:53.123055    5547 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/kubenet-710000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/kubenet-710000/disk.qcow2
	I0914 15:24:53.123060    5547 main.go:141] libmachine: Starting QEMU VM...
	I0914 15:24:53.123099    5547 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/kubenet-710000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/kubenet-710000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/kubenet-710000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:c7:4e:d6:d8:55 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/kubenet-710000/disk.qcow2
	I0914 15:24:53.124578    5547 main.go:141] libmachine: STDOUT: 
	I0914 15:24:53.124594    5547 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 15:24:53.124605    5547 client.go:171] LocalClient.Create took 183.774125ms
	I0914 15:24:55.126723    5547 start.go:128] duration metric: createHost completed in 2.239501959s
	I0914 15:24:55.126788    5547 start.go:83] releasing machines lock for "kubenet-710000", held for 2.240071917s
	W0914 15:24:55.127160    5547 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-710000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-710000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:24:55.136777    5547 out.go:177] 
	W0914 15:24:55.141792    5547 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 15:24:55.141837    5547 out.go:239] * 
	* 
	W0914 15:24:55.144418    5547 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 15:24:55.152755    5547 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.63s)
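Note: every qemu2 start in this group fails at the same point: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet (the SocketVMnetPath shown in the cluster config above), so the QEMU VM is never launched. Below is a minimal, hypothetical Go sketch of the same reachability check, useful for confirming on the CI host whether the socket_vmnet daemon is actually listening before re-running the suite; only the socket path is taken from the log, everything else is illustrative.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// SocketVMnetPath as reported in the failing test's cluster config.
	const sock = "/var/run/socket_vmnet"

	// Dial the unix socket the same way socket_vmnet_client needs to;
	// "connection refused" here matches the STDERR seen in the log.
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Printf("socket_vmnet not reachable at %s: %v\n", sock, err)
		return
	}
	defer conn.Close()
	fmt.Printf("socket_vmnet is listening at %s\n", sock)
}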

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (10.15s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-018000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-018000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0: exit status 80 (10.079096458s)

                                                
                                                
-- stdout --
	* [old-k8s-version-018000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17243
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node old-k8s-version-018000 in cluster old-k8s-version-018000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-018000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 15:24:57.326211    5661 out.go:296] Setting OutFile to fd 1 ...
	I0914 15:24:57.326333    5661 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:24:57.326337    5661 out.go:309] Setting ErrFile to fd 2...
	I0914 15:24:57.326340    5661 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:24:57.326627    5661 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17243-1006/.minikube/bin
	I0914 15:24:57.328059    5661 out.go:303] Setting JSON to false
	I0914 15:24:57.343682    5661 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3271,"bootTime":1694727026,"procs":411,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 15:24:57.343764    5661 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0914 15:24:57.349316    5661 out.go:177] * [old-k8s-version-018000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0914 15:24:57.357335    5661 out.go:177]   - MINIKUBE_LOCATION=17243
	I0914 15:24:57.362317    5661 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	I0914 15:24:57.357367    5661 notify.go:220] Checking for updates...
	I0914 15:24:57.368229    5661 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 15:24:57.371254    5661 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 15:24:57.372582    5661 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	I0914 15:24:57.375274    5661 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 15:24:57.378653    5661 config.go:182] Loaded profile config "multinode-463000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 15:24:57.378711    5661 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 15:24:57.383087    5661 out.go:177] * Using the qemu2 driver based on user configuration
	I0914 15:24:57.390272    5661 start.go:298] selected driver: qemu2
	I0914 15:24:57.390276    5661 start.go:902] validating driver "qemu2" against <nil>
	I0914 15:24:57.390282    5661 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 15:24:57.392225    5661 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0914 15:24:57.395277    5661 out.go:177] * Automatically selected the socket_vmnet network
	I0914 15:24:57.398372    5661 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 15:24:57.398393    5661 cni.go:84] Creating CNI manager for ""
	I0914 15:24:57.398408    5661 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0914 15:24:57.398413    5661 start_flags.go:321] config:
	{Name:old-k8s-version-018000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-018000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSoc
k: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 15:24:57.402452    5661 iso.go:125] acquiring lock: {Name:mkd5b68b252c18146a825ff5365883948b3c6983 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 15:24:57.408242    5661 out.go:177] * Starting control plane node old-k8s-version-018000 in cluster old-k8s-version-018000
	I0914 15:24:57.412236    5661 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0914 15:24:57.412258    5661 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0914 15:24:57.412278    5661 cache.go:57] Caching tarball of preloaded images
	I0914 15:24:57.412339    5661 preload.go:174] Found /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 15:24:57.412346    5661 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0914 15:24:57.412424    5661 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/old-k8s-version-018000/config.json ...
	I0914 15:24:57.412448    5661 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/old-k8s-version-018000/config.json: {Name:mk32c1084c0bf32c790d4dd463e1165c920cae48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 15:24:57.412628    5661 start.go:365] acquiring machines lock for old-k8s-version-018000: {Name:mk831f2bd5a9a0c932c8a59972319d68edb53d07 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 15:24:57.412662    5661 start.go:369] acquired machines lock for "old-k8s-version-018000" in 25.833µs
	I0914 15:24:57.412675    5661 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-018000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-018000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 15:24:57.412714    5661 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 15:24:57.421256    5661 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0914 15:24:57.437874    5661 start.go:159] libmachine.API.Create for "old-k8s-version-018000" (driver="qemu2")
	I0914 15:24:57.437905    5661 client.go:168] LocalClient.Create starting
	I0914 15:24:57.437971    5661 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem
	I0914 15:24:57.437999    5661 main.go:141] libmachine: Decoding PEM data...
	I0914 15:24:57.438012    5661 main.go:141] libmachine: Parsing certificate...
	I0914 15:24:57.438055    5661 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/cert.pem
	I0914 15:24:57.438073    5661 main.go:141] libmachine: Decoding PEM data...
	I0914 15:24:57.438082    5661 main.go:141] libmachine: Parsing certificate...
	I0914 15:24:57.438432    5661 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17243-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1694625400-17243-arm64.iso...
	I0914 15:24:57.555052    5661 main.go:141] libmachine: Creating SSH key...
	I0914 15:24:57.731678    5661 main.go:141] libmachine: Creating Disk image...
	I0914 15:24:57.731685    5661 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 15:24:57.731857    5661 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/old-k8s-version-018000/disk.qcow2.raw /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/old-k8s-version-018000/disk.qcow2
	I0914 15:24:57.740690    5661 main.go:141] libmachine: STDOUT: 
	I0914 15:24:57.740715    5661 main.go:141] libmachine: STDERR: 
	I0914 15:24:57.740776    5661 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/old-k8s-version-018000/disk.qcow2 +20000M
	I0914 15:24:57.748071    5661 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 15:24:57.748084    5661 main.go:141] libmachine: STDERR: 
	I0914 15:24:57.748103    5661 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/old-k8s-version-018000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/old-k8s-version-018000/disk.qcow2
	I0914 15:24:57.748108    5661 main.go:141] libmachine: Starting QEMU VM...
	I0914 15:24:57.748142    5661 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/old-k8s-version-018000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/old-k8s-version-018000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/old-k8s-version-018000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:94:18:5c:46:e0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/old-k8s-version-018000/disk.qcow2
	I0914 15:24:57.749706    5661 main.go:141] libmachine: STDOUT: 
	I0914 15:24:57.749723    5661 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 15:24:57.749741    5661 client.go:171] LocalClient.Create took 311.835292ms
	I0914 15:24:59.751943    5661 start.go:128] duration metric: createHost completed in 2.339249125s
	I0914 15:24:59.752017    5661 start.go:83] releasing machines lock for "old-k8s-version-018000", held for 2.339396292s
	W0914 15:24:59.752128    5661 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:24:59.761483    5661 out.go:177] * Deleting "old-k8s-version-018000" in qemu2 ...
	W0914 15:24:59.782353    5661 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:24:59.782416    5661 start.go:703] Will try again in 5 seconds ...
	I0914 15:25:04.784049    5661 start.go:365] acquiring machines lock for old-k8s-version-018000: {Name:mk831f2bd5a9a0c932c8a59972319d68edb53d07 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 15:25:04.784532    5661 start.go:369] acquired machines lock for "old-k8s-version-018000" in 391.334µs
	I0914 15:25:04.784698    5661 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-018000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-018000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 15:25:04.785066    5661 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 15:25:04.789650    5661 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0914 15:25:04.836389    5661 start.go:159] libmachine.API.Create for "old-k8s-version-018000" (driver="qemu2")
	I0914 15:25:04.836441    5661 client.go:168] LocalClient.Create starting
	I0914 15:25:04.836573    5661 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem
	I0914 15:25:04.836643    5661 main.go:141] libmachine: Decoding PEM data...
	I0914 15:25:04.836666    5661 main.go:141] libmachine: Parsing certificate...
	I0914 15:25:04.836753    5661 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/cert.pem
	I0914 15:25:04.836790    5661 main.go:141] libmachine: Decoding PEM data...
	I0914 15:25:04.836808    5661 main.go:141] libmachine: Parsing certificate...
	I0914 15:25:04.837319    5661 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17243-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1694625400-17243-arm64.iso...
	I0914 15:25:04.964302    5661 main.go:141] libmachine: Creating SSH key...
	I0914 15:25:05.315930    5661 main.go:141] libmachine: Creating Disk image...
	I0914 15:25:05.315940    5661 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 15:25:05.316079    5661 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/old-k8s-version-018000/disk.qcow2.raw /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/old-k8s-version-018000/disk.qcow2
	I0914 15:25:05.324907    5661 main.go:141] libmachine: STDOUT: 
	I0914 15:25:05.324923    5661 main.go:141] libmachine: STDERR: 
	I0914 15:25:05.324973    5661 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/old-k8s-version-018000/disk.qcow2 +20000M
	I0914 15:25:05.332436    5661 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 15:25:05.332460    5661 main.go:141] libmachine: STDERR: 
	I0914 15:25:05.332481    5661 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/old-k8s-version-018000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/old-k8s-version-018000/disk.qcow2
	I0914 15:25:05.332492    5661 main.go:141] libmachine: Starting QEMU VM...
	I0914 15:25:05.332532    5661 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/old-k8s-version-018000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/old-k8s-version-018000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/old-k8s-version-018000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:64:66:42:8d:52 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/old-k8s-version-018000/disk.qcow2
	I0914 15:25:05.334120    5661 main.go:141] libmachine: STDOUT: 
	I0914 15:25:05.334132    5661 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 15:25:05.334145    5661 client.go:171] LocalClient.Create took 497.710291ms
	I0914 15:25:07.335751    5661 start.go:128] duration metric: createHost completed in 2.550690667s
	I0914 15:25:07.335814    5661 start.go:83] releasing machines lock for "old-k8s-version-018000", held for 2.551313625s
	W0914 15:25:07.336156    5661 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-018000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-018000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:25:07.345666    5661 out.go:177] 
	W0914 15:25:07.351942    5661 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 15:25:07.351972    5661 out.go:239] * 
	* 
	W0914 15:25:07.354554    5661 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 15:25:07.364524    5661 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-018000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-018000 -n old-k8s-version-018000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-018000 -n old-k8s-version-018000: exit status 7 (66.575792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-018000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (10.15s)
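As in the previous failure, the disk-image preparation steps visible in the log (qemu-img convert to qcow2, then a resize by +20000M) complete successfully; only the final socket_vmnet-backed QEMU launch fails. The sketch below is illustrative only and simply replays those two qemu-img invocations via os/exec, under the assumption that qemu-img is on PATH; the prepareDisk helper and the file paths are hypothetical, not minikube code.

package main

import (
	"fmt"
	"os/exec"
)

// prepareDisk replays the two qemu-img steps the log shows libmachine running
// before the VM launch: convert the raw image to qcow2, then grow it.
func prepareDisk(raw, qcow2 string) error {
	// qemu-img convert -f raw -O qcow2 <raw> <qcow2>
	if out, err := exec.Command("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow2).CombinedOutput(); err != nil {
		return fmt.Errorf("convert: %v: %s", err, out)
	}
	// qemu-img resize <qcow2> +20000M, matching the size used in the log.
	if out, err := exec.Command("qemu-img", "resize", qcow2, "+20000M").CombinedOutput(); err != nil {
		return fmt.Errorf("resize: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := prepareDisk("disk.qcow2.raw", "disk.qcow2"); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("disk image prepared")
}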

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (1.93s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.327491383.exe start -p stopped-upgrade-612000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:196: (dbg) Non-zero exit: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.327491383.exe start -p stopped-upgrade-612000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.327491383.exe: permission denied (5.329292ms)
version_upgrade_test.go:196: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.327491383.exe start -p stopped-upgrade-612000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:196: (dbg) Non-zero exit: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.327491383.exe start -p stopped-upgrade-612000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.327491383.exe: permission denied (7.569458ms)
version_upgrade_test.go:196: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.327491383.exe start -p stopped-upgrade-612000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:196: (dbg) Non-zero exit: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.327491383.exe start -p stopped-upgrade-612000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.327491383.exe: permission denied (8.666958ms)
version_upgrade_test.go:202: legacy v1.6.2 start failed: fork/exec /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.327491383.exe: permission denied
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (1.93s)
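This failure differs from the socket_vmnet ones: the downloaded legacy v1.6.2 binary in the temp directory cannot be executed at all ("fork/exec ...: permission denied"), which points at a missing executable bit on the cached file rather than a VM or network problem. A minimal sketch of the kind of pre-flight fix the harness would need before invoking the binary is shown below; the path stands in for the temp file from the log and the 0o755 mode is an assumption.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Hypothetical path standing in for the cached legacy binary from the log.
	bin := "/tmp/minikube-v1.6.2.exe"

	// Ensure the file is executable before fork/exec; 0o755 is an assumption.
	if err := os.Chmod(bin, 0o755); err != nil {
		fmt.Println("chmod failed:", err)
		return
	}

	out, err := exec.Command(bin, "version").CombinedOutput()
	if err != nil {
		fmt.Println("exec failed:", err)
		return
	}
	fmt.Printf("%s", out)
}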

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.12s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-612000
version_upgrade_test.go:219: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p stopped-upgrade-612000: exit status 85 (111.5065ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-710000 sudo                                | bridge-710000          | jenkins | v1.31.2 | 14 Sep 23 15:24 PDT |                     |
	|         | systemctl status kubelet --all                       |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p bridge-710000 sudo                                | bridge-710000          | jenkins | v1.31.2 | 14 Sep 23 15:24 PDT |                     |
	|         | systemctl cat kubelet                                |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p bridge-710000 sudo                                | bridge-710000          | jenkins | v1.31.2 | 14 Sep 23 15:24 PDT |                     |
	|         | journalctl -xeu kubelet --all                        |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p bridge-710000 sudo cat                            | bridge-710000          | jenkins | v1.31.2 | 14 Sep 23 15:24 PDT |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                        |         |         |                     |                     |
	| ssh     | -p bridge-710000 sudo cat                            | bridge-710000          | jenkins | v1.31.2 | 14 Sep 23 15:24 PDT |                     |
	|         | /var/lib/kubelet/config.yaml                         |                        |         |         |                     |                     |
	| ssh     | -p bridge-710000 sudo                                | bridge-710000          | jenkins | v1.31.2 | 14 Sep 23 15:24 PDT |                     |
	|         | systemctl status docker --all                        |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p bridge-710000 sudo                                | bridge-710000          | jenkins | v1.31.2 | 14 Sep 23 15:24 PDT |                     |
	|         | systemctl cat docker                                 |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p bridge-710000 sudo cat                            | bridge-710000          | jenkins | v1.31.2 | 14 Sep 23 15:24 PDT |                     |
	|         | /etc/docker/daemon.json                              |                        |         |         |                     |                     |
	| ssh     | -p bridge-710000 sudo docker                         | bridge-710000          | jenkins | v1.31.2 | 14 Sep 23 15:24 PDT |                     |
	|         | system info                                          |                        |         |         |                     |                     |
	| ssh     | -p bridge-710000 sudo                                | bridge-710000          | jenkins | v1.31.2 | 14 Sep 23 15:24 PDT |                     |
	|         | systemctl status cri-docker                          |                        |         |         |                     |                     |
	|         | --all --full --no-pager                              |                        |         |         |                     |                     |
	| ssh     | -p bridge-710000 sudo                                | bridge-710000          | jenkins | v1.31.2 | 14 Sep 23 15:24 PDT |                     |
	|         | systemctl cat cri-docker                             |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p bridge-710000 sudo cat                            | bridge-710000          | jenkins | v1.31.2 | 14 Sep 23 15:24 PDT |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                        |         |         |                     |                     |
	| ssh     | -p bridge-710000 sudo cat                            | bridge-710000          | jenkins | v1.31.2 | 14 Sep 23 15:24 PDT |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                        |         |         |                     |                     |
	| ssh     | -p bridge-710000 sudo                                | bridge-710000          | jenkins | v1.31.2 | 14 Sep 23 15:24 PDT |                     |
	|         | cri-dockerd --version                                |                        |         |         |                     |                     |
	| ssh     | -p bridge-710000 sudo                                | bridge-710000          | jenkins | v1.31.2 | 14 Sep 23 15:24 PDT |                     |
	|         | systemctl status containerd                          |                        |         |         |                     |                     |
	|         | --all --full --no-pager                              |                        |         |         |                     |                     |
	| ssh     | -p bridge-710000 sudo                                | bridge-710000          | jenkins | v1.31.2 | 14 Sep 23 15:24 PDT |                     |
	|         | systemctl cat containerd                             |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p bridge-710000 sudo cat                            | bridge-710000          | jenkins | v1.31.2 | 14 Sep 23 15:24 PDT |                     |
	|         | /lib/systemd/system/containerd.service               |                        |         |         |                     |                     |
	| ssh     | -p bridge-710000 sudo cat                            | bridge-710000          | jenkins | v1.31.2 | 14 Sep 23 15:24 PDT |                     |
	|         | /etc/containerd/config.toml                          |                        |         |         |                     |                     |
	| ssh     | -p bridge-710000 sudo                                | bridge-710000          | jenkins | v1.31.2 | 14 Sep 23 15:24 PDT |                     |
	|         | containerd config dump                               |                        |         |         |                     |                     |
	| ssh     | -p bridge-710000 sudo                                | bridge-710000          | jenkins | v1.31.2 | 14 Sep 23 15:24 PDT |                     |
	|         | systemctl status crio --all                          |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p bridge-710000 sudo                                | bridge-710000          | jenkins | v1.31.2 | 14 Sep 23 15:24 PDT |                     |
	|         | systemctl cat crio --no-pager                        |                        |         |         |                     |                     |
	| ssh     | -p bridge-710000 sudo find                           | bridge-710000          | jenkins | v1.31.2 | 14 Sep 23 15:24 PDT |                     |
	|         | /etc/crio -type f -exec sh -c                        |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                        |         |         |                     |                     |
	| ssh     | -p bridge-710000 sudo crio                           | bridge-710000          | jenkins | v1.31.2 | 14 Sep 23 15:24 PDT |                     |
	|         | config                                               |                        |         |         |                     |                     |
	| delete  | -p bridge-710000                                     | bridge-710000          | jenkins | v1.31.2 | 14 Sep 23 15:24 PDT | 14 Sep 23 15:24 PDT |
	| start   | -p kubenet-710000                                    | kubenet-710000         | jenkins | v1.31.2 | 14 Sep 23 15:24 PDT |                     |
	|         | --memory=3072                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                        |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                        |         |         |                     |                     |
	|         | --network-plugin=kubenet                             |                        |         |         |                     |                     |
	|         | --driver=qemu2                                       |                        |         |         |                     |                     |
	| ssh     | -p kubenet-710000 sudo cat                           | kubenet-710000         | jenkins | v1.31.2 | 14 Sep 23 15:24 PDT |                     |
	|         | /etc/nsswitch.conf                                   |                        |         |         |                     |                     |
	| ssh     | -p kubenet-710000 sudo cat                           | kubenet-710000         | jenkins | v1.31.2 | 14 Sep 23 15:24 PDT |                     |
	|         | /etc/hosts                                           |                        |         |         |                     |                     |
	| ssh     | -p kubenet-710000 sudo cat                           | kubenet-710000         | jenkins | v1.31.2 | 14 Sep 23 15:24 PDT |                     |
	|         | /etc/resolv.conf                                     |                        |         |         |                     |                     |
	| ssh     | -p kubenet-710000 sudo crictl                        | kubenet-710000         | jenkins | v1.31.2 | 14 Sep 23 15:24 PDT |                     |
	|         | pods                                                 |                        |         |         |                     |                     |
	| ssh     | -p kubenet-710000 sudo crictl                        | kubenet-710000         | jenkins | v1.31.2 | 14 Sep 23 15:24 PDT |                     |
	|         | ps --all                                             |                        |         |         |                     |                     |
	| ssh     | -p kubenet-710000 sudo find                          | kubenet-710000         | jenkins | v1.31.2 | 14 Sep 23 15:24 PDT |                     |
	|         | /etc/cni -type f -exec sh -c                         |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                        |         |         |                     |                     |
	| ssh     | -p kubenet-710000 sudo ip a s                        | kubenet-710000         | jenkins | v1.31.2 | 14 Sep 23 15:24 PDT |                     |
	| ssh     | -p kubenet-710000 sudo ip r s                        | kubenet-710000         | jenkins | v1.31.2 | 14 Sep 23 15:24 PDT |                     |
	| ssh     | -p kubenet-710000 sudo                               | kubenet-710000         | jenkins | v1.31.2 | 14 Sep 23 15:24 PDT |                     |
	|         | iptables-save                                        |                        |         |         |                     |                     |
	| ssh     | -p kubenet-710000 sudo                               | kubenet-710000         | jenkins | v1.31.2 | 14 Sep 23 15:24 PDT |                     |
	|         | iptables -t nat -L -n -v                             |                        |         |         |                     |                     |
	| ssh     | -p kubenet-710000 sudo                               | kubenet-710000         | jenkins | v1.31.2 | 14 Sep 23 15:24 PDT |                     |
	|         | systemctl status kubelet --all                       |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p kubenet-710000 sudo                               | kubenet-710000         | jenkins | v1.31.2 | 14 Sep 23 15:24 PDT |                     |
	|         | systemctl cat kubelet                                |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p kubenet-710000 sudo                               | kubenet-710000         | jenkins | v1.31.2 | 14 Sep 23 15:24 PDT |                     |
	|         | journalctl -xeu kubelet --all                        |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p kubenet-710000 sudo cat                           | kubenet-710000         | jenkins | v1.31.2 | 14 Sep 23 15:24 PDT |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                        |         |         |                     |                     |
	| ssh     | -p kubenet-710000 sudo cat                           | kubenet-710000         | jenkins | v1.31.2 | 14 Sep 23 15:24 PDT |                     |
	|         | /var/lib/kubelet/config.yaml                         |                        |         |         |                     |                     |
	| ssh     | -p kubenet-710000 sudo                               | kubenet-710000         | jenkins | v1.31.2 | 14 Sep 23 15:24 PDT |                     |
	|         | systemctl status docker --all                        |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p kubenet-710000 sudo                               | kubenet-710000         | jenkins | v1.31.2 | 14 Sep 23 15:24 PDT |                     |
	|         | systemctl cat docker                                 |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p kubenet-710000 sudo cat                           | kubenet-710000         | jenkins | v1.31.2 | 14 Sep 23 15:24 PDT |                     |
	|         | /etc/docker/daemon.json                              |                        |         |         |                     |                     |
	| ssh     | -p kubenet-710000 sudo docker                        | kubenet-710000         | jenkins | v1.31.2 | 14 Sep 23 15:24 PDT |                     |
	|         | system info                                          |                        |         |         |                     |                     |
	| ssh     | -p kubenet-710000 sudo                               | kubenet-710000         | jenkins | v1.31.2 | 14 Sep 23 15:24 PDT |                     |
	|         | systemctl status cri-docker                          |                        |         |         |                     |                     |
	|         | --all --full --no-pager                              |                        |         |         |                     |                     |
	| ssh     | -p kubenet-710000 sudo                               | kubenet-710000         | jenkins | v1.31.2 | 14 Sep 23 15:24 PDT |                     |
	|         | systemctl cat cri-docker                             |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p kubenet-710000 sudo cat                           | kubenet-710000         | jenkins | v1.31.2 | 14 Sep 23 15:24 PDT |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                        |         |         |                     |                     |
	| ssh     | -p kubenet-710000 sudo cat                           | kubenet-710000         | jenkins | v1.31.2 | 14 Sep 23 15:24 PDT |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                        |         |         |                     |                     |
	| ssh     | -p kubenet-710000 sudo                               | kubenet-710000         | jenkins | v1.31.2 | 14 Sep 23 15:24 PDT |                     |
	|         | cri-dockerd --version                                |                        |         |         |                     |                     |
	| ssh     | -p kubenet-710000 sudo                               | kubenet-710000         | jenkins | v1.31.2 | 14 Sep 23 15:24 PDT |                     |
	|         | systemctl status containerd                          |                        |         |         |                     |                     |
	|         | --all --full --no-pager                              |                        |         |         |                     |                     |
	| ssh     | -p kubenet-710000 sudo                               | kubenet-710000         | jenkins | v1.31.2 | 14 Sep 23 15:24 PDT |                     |
	|         | systemctl cat containerd                             |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p kubenet-710000 sudo cat                           | kubenet-710000         | jenkins | v1.31.2 | 14 Sep 23 15:24 PDT |                     |
	|         | /lib/systemd/system/containerd.service               |                        |         |         |                     |                     |
	| ssh     | -p kubenet-710000 sudo cat                           | kubenet-710000         | jenkins | v1.31.2 | 14 Sep 23 15:24 PDT |                     |
	|         | /etc/containerd/config.toml                          |                        |         |         |                     |                     |
	| ssh     | -p kubenet-710000 sudo                               | kubenet-710000         | jenkins | v1.31.2 | 14 Sep 23 15:24 PDT |                     |
	|         | containerd config dump                               |                        |         |         |                     |                     |
	| ssh     | -p kubenet-710000 sudo                               | kubenet-710000         | jenkins | v1.31.2 | 14 Sep 23 15:24 PDT |                     |
	|         | systemctl status crio --all                          |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p kubenet-710000 sudo                               | kubenet-710000         | jenkins | v1.31.2 | 14 Sep 23 15:24 PDT |                     |
	|         | systemctl cat crio --no-pager                        |                        |         |         |                     |                     |
	| ssh     | -p kubenet-710000 sudo find                          | kubenet-710000         | jenkins | v1.31.2 | 14 Sep 23 15:24 PDT |                     |
	|         | /etc/crio -type f -exec sh -c                        |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                        |         |         |                     |                     |
	| ssh     | -p kubenet-710000 sudo crio                          | kubenet-710000         | jenkins | v1.31.2 | 14 Sep 23 15:24 PDT |                     |
	|         | config                                               |                        |         |         |                     |                     |
	| delete  | -p kubenet-710000                                    | kubenet-710000         | jenkins | v1.31.2 | 14 Sep 23 15:24 PDT | 14 Sep 23 15:24 PDT |
	| start   | -p old-k8s-version-018000                            | old-k8s-version-018000 | jenkins | v1.31.2 | 14 Sep 23 15:24 PDT |                     |
	|         | --memory=2200                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                        |         |         |                     |                     |
	|         | --kvm-network=default                                |                        |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                        |                        |         |         |                     |                     |
	|         | --disable-driver-mounts                              |                        |         |         |                     |                     |
	|         | --keep-context=false                                 |                        |         |         |                     |                     |
	|         | --driver=qemu2                                       |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                         |                        |         |         |                     |                     |
	|---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/14 15:24:57
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.21.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 15:24:57.326211    5661 out.go:296] Setting OutFile to fd 1 ...
	I0914 15:24:57.326333    5661 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:24:57.326337    5661 out.go:309] Setting ErrFile to fd 2...
	I0914 15:24:57.326340    5661 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:24:57.326627    5661 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17243-1006/.minikube/bin
	I0914 15:24:57.328059    5661 out.go:303] Setting JSON to false
	I0914 15:24:57.343682    5661 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3271,"bootTime":1694727026,"procs":411,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 15:24:57.343764    5661 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0914 15:24:57.349316    5661 out.go:177] * [old-k8s-version-018000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0914 15:24:57.357335    5661 out.go:177]   - MINIKUBE_LOCATION=17243
	I0914 15:24:57.362317    5661 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	I0914 15:24:57.357367    5661 notify.go:220] Checking for updates...
	I0914 15:24:57.368229    5661 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 15:24:57.371254    5661 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 15:24:57.372582    5661 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	I0914 15:24:57.375274    5661 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 15:24:57.378653    5661 config.go:182] Loaded profile config "multinode-463000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 15:24:57.378711    5661 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 15:24:57.383087    5661 out.go:177] * Using the qemu2 driver based on user configuration
	I0914 15:24:57.390272    5661 start.go:298] selected driver: qemu2
	I0914 15:24:57.390276    5661 start.go:902] validating driver "qemu2" against <nil>
	I0914 15:24:57.390282    5661 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 15:24:57.392225    5661 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0914 15:24:57.395277    5661 out.go:177] * Automatically selected the socket_vmnet network
	I0914 15:24:57.398372    5661 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 15:24:57.398393    5661 cni.go:84] Creating CNI manager for ""
	I0914 15:24:57.398408    5661 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0914 15:24:57.398413    5661 start_flags.go:321] config:
	{Name:old-k8s-version-018000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-018000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 15:24:57.402452    5661 iso.go:125] acquiring lock: {Name:mkd5b68b252c18146a825ff5365883948b3c6983 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 15:24:57.408242    5661 out.go:177] * Starting control plane node old-k8s-version-018000 in cluster old-k8s-version-018000
	I0914 15:24:57.412236    5661 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0914 15:24:57.412258    5661 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0914 15:24:57.412278    5661 cache.go:57] Caching tarball of preloaded images
	I0914 15:24:57.412339    5661 preload.go:174] Found /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 15:24:57.412346    5661 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0914 15:24:57.412424    5661 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/old-k8s-version-018000/config.json ...
	I0914 15:24:57.412448    5661 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/old-k8s-version-018000/config.json: {Name:mk32c1084c0bf32c790d4dd463e1165c920cae48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 15:24:57.412628    5661 start.go:365] acquiring machines lock for old-k8s-version-018000: {Name:mk831f2bd5a9a0c932c8a59972319d68edb53d07 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 15:24:57.412662    5661 start.go:369] acquired machines lock for "old-k8s-version-018000" in 25.833µs
	I0914 15:24:57.412675    5661 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-018000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-018000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 15:24:57.412714    5661 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 15:24:57.421256    5661 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0914 15:24:57.437874    5661 start.go:159] libmachine.API.Create for "old-k8s-version-018000" (driver="qemu2")
	I0914 15:24:57.437905    5661 client.go:168] LocalClient.Create starting
	I0914 15:24:57.437971    5661 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem
	I0914 15:24:57.437999    5661 main.go:141] libmachine: Decoding PEM data...
	I0914 15:24:57.438012    5661 main.go:141] libmachine: Parsing certificate...
	I0914 15:24:57.438055    5661 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/cert.pem
	I0914 15:24:57.438073    5661 main.go:141] libmachine: Decoding PEM data...
	I0914 15:24:57.438082    5661 main.go:141] libmachine: Parsing certificate...
	I0914 15:24:57.438432    5661 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17243-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1694625400-17243-arm64.iso...
	I0914 15:24:57.555052    5661 main.go:141] libmachine: Creating SSH key...
	I0914 15:24:57.731678    5661 main.go:141] libmachine: Creating Disk image...
	I0914 15:24:57.731685    5661 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 15:24:57.731857    5661 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/old-k8s-version-018000/disk.qcow2.raw /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/old-k8s-version-018000/disk.qcow2
	I0914 15:24:57.740690    5661 main.go:141] libmachine: STDOUT: 
	I0914 15:24:57.740715    5661 main.go:141] libmachine: STDERR: 
	I0914 15:24:57.740776    5661 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/old-k8s-version-018000/disk.qcow2 +20000M
	I0914 15:24:57.748071    5661 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 15:24:57.748084    5661 main.go:141] libmachine: STDERR: 
	I0914 15:24:57.748103    5661 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/old-k8s-version-018000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/old-k8s-version-018000/disk.qcow2
	I0914 15:24:57.748108    5661 main.go:141] libmachine: Starting QEMU VM...
	I0914 15:24:57.748142    5661 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/old-k8s-version-018000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/old-k8s-version-018000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/old-k8s-version-018000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:94:18:5c:46:e0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/old-k8s-version-018000/disk.qcow2
	I0914 15:24:57.749706    5661 main.go:141] libmachine: STDOUT: 
	I0914 15:24:57.749723    5661 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 15:24:57.749741    5661 client.go:171] LocalClient.Create took 311.835292ms
	I0914 15:24:59.751943    5661 start.go:128] duration metric: createHost completed in 2.339249125s
	I0914 15:24:59.752017    5661 start.go:83] releasing machines lock for "old-k8s-version-018000", held for 2.339396292s
	W0914 15:24:59.752128    5661 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:24:59.761483    5661 out.go:177] * Deleting "old-k8s-version-018000" in qemu2 ...
	
	* 
	* Profile "stopped-upgrade-612000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p stopped-upgrade-612000"

                                                
                                                
-- /stdout --
version_upgrade_test.go:221: `minikube logs` after upgrade to HEAD from v1.6.2 failed: exit status 85
--- FAIL: TestStoppedBinaryUpgrade/MinikubeLogs (0.12s)
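Every qemu2 VM creation in this run fails at the same step: libmachine invokes /opt/socket_vmnet/bin/socket_vmnet_client against /var/run/socket_vmnet and gets "Connection refused", so no guest ever boots. A minimal host-side check for the CI agent is sketched below; that socket_vmnet is expected to run as a background daemon on the agent (and how it is managed) is an assumption, not something stated in this report.

  ls -l /var/run/socket_vmnet           # the control socket should exist and be a unix socket
  pgrep -fl socket_vmnet                # a socket_vmnet daemon process should be running
  sudo launchctl list | grep -i vmnet   # if managed by launchd, a loaded job should be listed

If the daemon is down, every "Creating qemu2 VM ..." attempt fails with the same error regardless of profile or Kubernetes version.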

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (9.82s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-399000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-399000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.1: exit status 80 (9.7577185s)

                                                
                                                
-- stdout --
	* [no-preload-399000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17243
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node no-preload-399000 in cluster no-preload-399000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-399000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 15:25:00.257665    5690 out.go:296] Setting OutFile to fd 1 ...
	I0914 15:25:00.257829    5690 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:25:00.257832    5690 out.go:309] Setting ErrFile to fd 2...
	I0914 15:25:00.257835    5690 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:25:00.257968    5690 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17243-1006/.minikube/bin
	I0914 15:25:00.258955    5690 out.go:303] Setting JSON to false
	I0914 15:25:00.274055    5690 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3274,"bootTime":1694727026,"procs":412,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 15:25:00.274136    5690 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0914 15:25:00.278779    5690 out.go:177] * [no-preload-399000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0914 15:25:00.290733    5690 out.go:177]   - MINIKUBE_LOCATION=17243
	I0914 15:25:00.293678    5690 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	I0914 15:25:00.290800    5690 notify.go:220] Checking for updates...
	I0914 15:25:00.299718    5690 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 15:25:00.301075    5690 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 15:25:00.303740    5690 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	I0914 15:25:00.306714    5690 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 15:25:00.310110    5690 config.go:182] Loaded profile config "multinode-463000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 15:25:00.310178    5690 config.go:182] Loaded profile config "old-k8s-version-018000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0914 15:25:00.310230    5690 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 15:25:00.314672    5690 out.go:177] * Using the qemu2 driver based on user configuration
	I0914 15:25:00.321722    5690 start.go:298] selected driver: qemu2
	I0914 15:25:00.321727    5690 start.go:902] validating driver "qemu2" against <nil>
	I0914 15:25:00.321733    5690 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 15:25:00.323702    5690 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0914 15:25:00.326683    5690 out.go:177] * Automatically selected the socket_vmnet network
	I0914 15:25:00.329757    5690 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 15:25:00.329784    5690 cni.go:84] Creating CNI manager for ""
	I0914 15:25:00.329793    5690 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 15:25:00.329798    5690 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0914 15:25:00.329805    5690 start_flags.go:321] config:
	{Name:no-preload-399000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:no-preload-399000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 15:25:00.333935    5690 iso.go:125] acquiring lock: {Name:mkd5b68b252c18146a825ff5365883948b3c6983 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 15:25:00.340690    5690 out.go:177] * Starting control plane node no-preload-399000 in cluster no-preload-399000
	I0914 15:25:00.344745    5690 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0914 15:25:00.344836    5690 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/no-preload-399000/config.json ...
	I0914 15:25:00.344855    5690 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/no-preload-399000/config.json: {Name:mk69deeecc7c26063068c9e7e4feb1004ba6ecf6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 15:25:00.344855    5690 cache.go:107] acquiring lock: {Name:mkd53f39c8984a1a6e842ba1d0d45a9f41a4874f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 15:25:00.344867    5690 cache.go:107] acquiring lock: {Name:mk38405f9d7ad5062b41817fc65cd3f9f3b4b705 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 15:25:00.344897    5690 cache.go:107] acquiring lock: {Name:mkecbaa62c5ff769f5bb17abe1e42cb21534b918 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 15:25:00.344913    5690 cache.go:115] /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0914 15:25:00.344921    5690 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 69.417µs
	I0914 15:25:00.344928    5690 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0914 15:25:00.344934    5690 cache.go:107] acquiring lock: {Name:mk5f3cab12a0c015786fc1c3a0e59b4998da30db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 15:25:00.345011    5690 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.1
	I0914 15:25:00.345059    5690 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.1
	I0914 15:25:00.345109    5690 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.1
	I0914 15:25:00.345112    5690 cache.go:107] acquiring lock: {Name:mkf290d5b451e5e83db5b843811709e1fe8bd1b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 15:25:00.345121    5690 cache.go:107] acquiring lock: {Name:mkda76779f206f98ebd56e90f225625e24992501 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 15:25:00.345135    5690 cache.go:107] acquiring lock: {Name:mkf4ea0b23692d9f84164f381f3c43b538c1faca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 15:25:00.345131    5690 cache.go:107] acquiring lock: {Name:mk3abd5de080bbd66581faee8e8db1c4cd224593 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 15:25:00.345240    5690 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I0914 15:25:00.345261    5690 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.1
	I0914 15:25:00.345277    5690 start.go:365] acquiring machines lock for no-preload-399000: {Name:mk831f2bd5a9a0c932c8a59972319d68edb53d07 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 15:25:00.345329    5690 start.go:369] acquired machines lock for "no-preload-399000" in 42.375µs
	I0914 15:25:00.345345    5690 start.go:93] Provisioning new machine with config: &{Name:no-preload-399000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:no-preload-399000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 15:25:00.345393    5690 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0914 15:25:00.345394    5690 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 15:25:00.345418    5690 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I0914 15:25:00.353525    5690 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0914 15:25:00.358237    5690 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I0914 15:25:00.358335    5690 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.1
	I0914 15:25:00.358373    5690 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.1
	I0914 15:25:00.358931    5690 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.1
	I0914 15:25:00.358935    5690 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0914 15:25:00.358930    5690 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I0914 15:25:00.358990    5690 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.1
	I0914 15:25:00.370262    5690 start.go:159] libmachine.API.Create for "no-preload-399000" (driver="qemu2")
	I0914 15:25:00.370290    5690 client.go:168] LocalClient.Create starting
	I0914 15:25:00.370373    5690 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem
	I0914 15:25:00.370399    5690 main.go:141] libmachine: Decoding PEM data...
	I0914 15:25:00.370412    5690 main.go:141] libmachine: Parsing certificate...
	I0914 15:25:00.370450    5690 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/cert.pem
	I0914 15:25:00.370469    5690 main.go:141] libmachine: Decoding PEM data...
	I0914 15:25:00.370477    5690 main.go:141] libmachine: Parsing certificate...
	I0914 15:25:00.370833    5690 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17243-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1694625400-17243-arm64.iso...
	I0914 15:25:00.489665    5690 main.go:141] libmachine: Creating SSH key...
	I0914 15:25:00.523780    5690 main.go:141] libmachine: Creating Disk image...
	I0914 15:25:00.523791    5690 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 15:25:00.523949    5690 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/no-preload-399000/disk.qcow2.raw /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/no-preload-399000/disk.qcow2
	I0914 15:25:00.532913    5690 main.go:141] libmachine: STDOUT: 
	I0914 15:25:00.532931    5690 main.go:141] libmachine: STDERR: 
	I0914 15:25:00.532995    5690 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/no-preload-399000/disk.qcow2 +20000M
	I0914 15:25:00.540797    5690 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 15:25:00.540813    5690 main.go:141] libmachine: STDERR: 
	I0914 15:25:00.540834    5690 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/no-preload-399000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/no-preload-399000/disk.qcow2
	I0914 15:25:00.540844    5690 main.go:141] libmachine: Starting QEMU VM...
	I0914 15:25:00.540872    5690 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/no-preload-399000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/no-preload-399000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/no-preload-399000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:46:98:2b:ba:79 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/no-preload-399000/disk.qcow2
	I0914 15:25:00.542524    5690 main.go:141] libmachine: STDOUT: 
	I0914 15:25:00.542538    5690 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 15:25:00.542559    5690 client.go:171] LocalClient.Create took 172.266833ms
	I0914 15:25:00.950573    5690 cache.go:162] opening:  /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1
	I0914 15:25:00.998254    5690 cache.go:162] opening:  /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.1
	I0914 15:25:01.184103    5690 cache.go:162] opening:  /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.1
	I0914 15:25:01.389905    5690 cache.go:162] opening:  /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.1
	I0914 15:25:01.616837    5690 cache.go:162] opening:  /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9
	I0914 15:25:01.752280    5690 cache.go:157] /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0914 15:25:01.752301    5690 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 1.407291834s
	I0914 15:25:01.752309    5690 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0914 15:25:01.814750    5690 cache.go:162] opening:  /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0
	I0914 15:25:02.024413    5690 cache.go:162] opening:  /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.1
	I0914 15:25:02.542746    5690 start.go:128] duration metric: createHost completed in 2.197358292s
	I0914 15:25:02.542778    5690 start.go:83] releasing machines lock for "no-preload-399000", held for 2.197489917s
	W0914 15:25:02.542817    5690 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:25:02.549844    5690 out.go:177] * Deleting "no-preload-399000" in qemu2 ...
	W0914 15:25:02.565474    5690 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:25:02.565501    5690 start.go:703] Will try again in 5 seconds ...
	I0914 15:25:03.328632    5690 cache.go:157] /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 exists
	I0914 15:25:03.328691    5690 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.10.1" -> "/Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1" took 2.983625208s
	I0914 15:25:03.328719    5690 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.10.1 -> /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 succeeded
	I0914 15:25:03.616797    5690 cache.go:157] /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.1 exists
	I0914 15:25:03.616877    5690 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.28.1" -> "/Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.1" took 3.272005959s
	I0914 15:25:03.616915    5690 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.28.1 -> /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.1 succeeded
	I0914 15:25:04.818331    5690 cache.go:157] /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.1 exists
	I0914 15:25:04.818362    5690 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.28.1" -> "/Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.1" took 4.473371458s
	I0914 15:25:04.818381    5690 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.28.1 -> /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.1 succeeded
	I0914 15:25:05.222808    5690 cache.go:157] /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.1 exists
	I0914 15:25:05.222828    5690 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.28.1" -> "/Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.1" took 4.878078459s
	I0914 15:25:05.222836    5690 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.28.1 -> /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.1 succeeded
	I0914 15:25:06.714187    5690 cache.go:157] /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.1 exists
	I0914 15:25:06.714260    5690 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.28.1" -> "/Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.1" took 6.369523375s
	I0914 15:25:06.714289    5690 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.28.1 -> /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.1 succeeded
	I0914 15:25:07.566281    5690 start.go:365] acquiring machines lock for no-preload-399000: {Name:mk831f2bd5a9a0c932c8a59972319d68edb53d07 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 15:25:07.566335    5690 start.go:369] acquired machines lock for "no-preload-399000" in 43µs
	I0914 15:25:07.566349    5690 start.go:93] Provisioning new machine with config: &{Name:no-preload-399000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:no-preload-399000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 15:25:07.566381    5690 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 15:25:07.577816    5690 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0914 15:25:07.591782    5690 start.go:159] libmachine.API.Create for "no-preload-399000" (driver="qemu2")
	I0914 15:25:07.591801    5690 client.go:168] LocalClient.Create starting
	I0914 15:25:07.591857    5690 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem
	I0914 15:25:07.591884    5690 main.go:141] libmachine: Decoding PEM data...
	I0914 15:25:07.591894    5690 main.go:141] libmachine: Parsing certificate...
	I0914 15:25:07.591943    5690 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/cert.pem
	I0914 15:25:07.591962    5690 main.go:141] libmachine: Decoding PEM data...
	I0914 15:25:07.591970    5690 main.go:141] libmachine: Parsing certificate...
	I0914 15:25:07.595223    5690 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17243-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1694625400-17243-arm64.iso...
	I0914 15:25:07.768424    5690 main.go:141] libmachine: Creating SSH key...
	I0914 15:25:07.914082    5690 main.go:141] libmachine: Creating Disk image...
	I0914 15:25:07.914091    5690 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 15:25:07.914240    5690 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/no-preload-399000/disk.qcow2.raw /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/no-preload-399000/disk.qcow2
	I0914 15:25:07.936821    5690 main.go:141] libmachine: STDOUT: 
	I0914 15:25:07.936836    5690 main.go:141] libmachine: STDERR: 
	I0914 15:25:07.936891    5690 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/no-preload-399000/disk.qcow2 +20000M
	I0914 15:25:07.944869    5690 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 15:25:07.944880    5690 main.go:141] libmachine: STDERR: 
	I0914 15:25:07.944893    5690 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/no-preload-399000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/no-preload-399000/disk.qcow2
	I0914 15:25:07.944900    5690 main.go:141] libmachine: Starting QEMU VM...
	I0914 15:25:07.944944    5690 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/no-preload-399000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/no-preload-399000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/no-preload-399000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:22:b5:00:8f:ed -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/no-preload-399000/disk.qcow2
	I0914 15:25:07.946582    5690 main.go:141] libmachine: STDOUT: 
	I0914 15:25:07.946596    5690 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 15:25:07.946611    5690 client.go:171] LocalClient.Create took 354.814167ms
	I0914 15:25:09.946779    5690 start.go:128] duration metric: createHost completed in 2.380421041s
	I0914 15:25:09.946839    5690 start.go:83] releasing machines lock for "no-preload-399000", held for 2.380545875s
	W0914 15:25:09.947038    5690 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-399000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-399000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:25:09.960567    5690 out.go:177] 
	W0914 15:25:09.967533    5690 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 15:25:09.967582    5690 out.go:239] * 
	* 
	W0914 15:25:09.969920    5690 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 15:25:09.977563    5690 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-399000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-399000 -n no-preload-399000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-399000 -n no-preload-399000: exit status 7 (61.597792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-399000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.82s)
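
Every failure in this block reduces to the same condition: nothing is accepting connections on the socket_vmnet unix socket, so the QEMU networking helper exits before the VM can start. Below is a minimal, hypothetical Go sketch (not part of the test output, and not minikube's own code) for reproducing that check in isolation; the socket path matches the SocketVMnetPath value shown in the config above, everything else is illustrative.

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Path taken from the SocketVMnetPath field in the profile config above.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// Same condition the log reports as
		// `Failed to connect to "/var/run/socket_vmnet": Connection refused`.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections at", sock)
}

If this probe fails on the build agent, the qemu2 driver cannot bring up any networked VM, which is consistent with every start attempt in this report failing the same way.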

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-018000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-018000 create -f testdata/busybox.yaml: exit status 1 (30.155958ms)

                                                
                                                
** stderr ** 
	error: no openapi getter

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-018000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-018000 -n old-k8s-version-018000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-018000 -n old-k8s-version-018000: exit status 7 (28.551542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-018000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-018000 -n old-k8s-version-018000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-018000 -n old-k8s-version-018000: exit status 7 (28.146792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-018000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.13s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-018000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-018000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-018000 describe deploy/metrics-server -n kube-system: exit status 1 (29.799041ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-018000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-018000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-018000 -n old-k8s-version-018000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-018000 -n old-k8s-version-018000: exit status 7 (33.778792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-018000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.13s)
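
The `context "old-k8s-version-018000" does not exist` errors above are a downstream effect of the failed start: the cluster context was never written to the kubeconfig, so every kubectl call against it fails immediately. A hedged sketch, assuming the k8s.io/client-go module is available, of how one could list what the test run's kubeconfig actually contains; the kubeconfig path and context name are taken from the log, the program itself is illustrative.

package main

import (
	"fmt"
	"log"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// KUBECONFIG path as reported in the run above.
	cfg, err := clientcmd.LoadFromFile("/Users/jenkins/minikube-integration/17243-1006/kubeconfig")
	if err != nil {
		log.Fatalf("loading kubeconfig: %v", err)
	}
	for name := range cfg.Contexts {
		fmt.Println("context:", name)
	}
	if _, ok := cfg.Contexts["old-k8s-version-018000"]; !ok {
		fmt.Println(`context "old-k8s-version-018000" is missing, matching the kubectl error above`)
	}
}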

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (7.26s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-018000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-018000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0: exit status 80 (7.192243042s)

                                                
                                                
-- stdout --
	* [old-k8s-version-018000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17243
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.1
	* Using the qemu2 driver based on existing profile
	* Starting control plane node old-k8s-version-018000 in cluster old-k8s-version-018000
	* Restarting existing qemu2 VM for "old-k8s-version-018000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-018000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 15:25:07.877642    5824 out.go:296] Setting OutFile to fd 1 ...
	I0914 15:25:07.877752    5824 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:25:07.877755    5824 out.go:309] Setting ErrFile to fd 2...
	I0914 15:25:07.877758    5824 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:25:07.877903    5824 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17243-1006/.minikube/bin
	I0914 15:25:07.878909    5824 out.go:303] Setting JSON to false
	I0914 15:25:07.894184    5824 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3281,"bootTime":1694727026,"procs":411,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 15:25:07.894271    5824 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0914 15:25:07.899000    5824 out.go:177] * [old-k8s-version-018000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0914 15:25:07.910016    5824 out.go:177]   - MINIKUBE_LOCATION=17243
	I0914 15:25:07.906054    5824 notify.go:220] Checking for updates...
	I0914 15:25:07.916980    5824 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	I0914 15:25:07.924012    5824 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 15:25:07.931817    5824 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 15:25:07.939865    5824 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	I0914 15:25:07.946949    5824 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 15:25:07.951244    5824 config.go:182] Loaded profile config "old-k8s-version-018000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0914 15:25:07.955785    5824 out.go:177] * Kubernetes 1.28.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.1
	I0914 15:25:07.959949    5824 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 15:25:07.962974    5824 out.go:177] * Using the qemu2 driver based on existing profile
	I0914 15:25:07.969970    5824 start.go:298] selected driver: qemu2
	I0914 15:25:07.969974    5824 start.go:902] validating driver "qemu2" against &{Name:old-k8s-version-018000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-018000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequ
ested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 15:25:07.970026    5824 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 15:25:07.971819    5824 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 15:25:07.971844    5824 cni.go:84] Creating CNI manager for ""
	I0914 15:25:07.971851    5824 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0914 15:25:07.971855    5824 start_flags.go:321] config:
	{Name:old-k8s-version-018000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-018000 Namespace:
default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Use
rs:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 15:25:07.975906    5824 iso.go:125] acquiring lock: {Name:mkd5b68b252c18146a825ff5365883948b3c6983 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 15:25:07.983963    5824 out.go:177] * Starting control plane node old-k8s-version-018000 in cluster old-k8s-version-018000
	I0914 15:25:07.986967    5824 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0914 15:25:07.986984    5824 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0914 15:25:07.986995    5824 cache.go:57] Caching tarball of preloaded images
	I0914 15:25:07.987056    5824 preload.go:174] Found /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 15:25:07.987061    5824 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0914 15:25:07.987132    5824 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/old-k8s-version-018000/config.json ...
	I0914 15:25:07.987383    5824 start.go:365] acquiring machines lock for old-k8s-version-018000: {Name:mk831f2bd5a9a0c932c8a59972319d68edb53d07 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 15:25:09.947006    5824 start.go:369] acquired machines lock for "old-k8s-version-018000" in 1.959619334s
	I0914 15:25:09.947205    5824 start.go:96] Skipping create...Using existing machine configuration
	I0914 15:25:09.947243    5824 fix.go:54] fixHost starting: 
	I0914 15:25:09.947899    5824 fix.go:102] recreateIfNeeded on old-k8s-version-018000: state=Stopped err=<nil>
	W0914 15:25:09.947943    5824 fix.go:128] unexpected machine state, will restart: <nil>
	I0914 15:25:09.964507    5824 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-018000" ...
	I0914 15:25:09.971759    5824 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/old-k8s-version-018000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/old-k8s-version-018000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/old-k8s-version-018000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:64:66:42:8d:52 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/old-k8s-version-018000/disk.qcow2
	I0914 15:25:09.981505    5824 main.go:141] libmachine: STDOUT: 
	I0914 15:25:09.981579    5824 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 15:25:09.981723    5824 fix.go:56] fixHost completed within 34.472083ms
	I0914 15:25:09.981748    5824 start.go:83] releasing machines lock for "old-k8s-version-018000", held for 34.702875ms
	W0914 15:25:09.981789    5824 start.go:688] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 15:25:09.981997    5824 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:25:09.982014    5824 start.go:703] Will try again in 5 seconds ...
	I0914 15:25:14.984226    5824 start.go:365] acquiring machines lock for old-k8s-version-018000: {Name:mk831f2bd5a9a0c932c8a59972319d68edb53d07 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 15:25:14.984727    5824 start.go:369] acquired machines lock for "old-k8s-version-018000" in 400.917µs
	I0914 15:25:14.984900    5824 start.go:96] Skipping create...Using existing machine configuration
	I0914 15:25:14.984922    5824 fix.go:54] fixHost starting: 
	I0914 15:25:14.985691    5824 fix.go:102] recreateIfNeeded on old-k8s-version-018000: state=Stopped err=<nil>
	W0914 15:25:14.985717    5824 fix.go:128] unexpected machine state, will restart: <nil>
	I0914 15:25:14.991304    5824 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-018000" ...
	I0914 15:25:14.999476    5824 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/old-k8s-version-018000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/old-k8s-version-018000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/old-k8s-version-018000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:64:66:42:8d:52 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/old-k8s-version-018000/disk.qcow2
	I0914 15:25:15.008896    5824 main.go:141] libmachine: STDOUT: 
	I0914 15:25:15.008946    5824 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 15:25:15.009026    5824 fix.go:56] fixHost completed within 24.107084ms
	I0914 15:25:15.009049    5824 start.go:83] releasing machines lock for "old-k8s-version-018000", held for 24.295458ms
	W0914 15:25:15.009265    5824 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-018000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-018000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:25:15.015286    5824 out.go:177] 
	W0914 15:25:15.019258    5824 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 15:25:15.019284    5824 out.go:239] * 
	* 
	W0914 15:25:15.021714    5824 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 15:25:15.031262    5824 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-018000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-018000 -n old-k8s-version-018000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-018000 -n old-k8s-version-018000: exit status 7 (67.005541ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-018000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (7.26s)
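
The stderr trace above shows the shape of minikube's restart path: one failed StartHost, a fixed 5 second pause ("Will try again in 5 seconds ..."), one retry, then exit with GUEST_PROVISION. The following Go fragment is a sketch of that observed flow only, not minikube's actual implementation; startHost is a hypothetical stand-in that always fails the way the log does so the retry path is visible.

package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost stands in for the fixHost/StartHost step seen in the log.
func startHost() error {
	return errors.New(`driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	// Mirrors the flow visible at start.go:688/703: fail, wait 5s, retry once, give up.
	if err := startHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second)
		if err := startHost(); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}
}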

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-399000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-399000 create -f testdata/busybox.yaml: exit status 1 (29.220375ms)

                                                
                                                
** stderr ** 
	error: no openapi getter

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-399000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-399000 -n no-preload-399000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-399000 -n no-preload-399000: exit status 7 (28.166625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-399000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-399000 -n no-preload-399000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-399000 -n no-preload-399000: exit status 7 (28.599583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-399000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-399000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-399000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-399000 describe deploy/metrics-server -n kube-system: exit status 1 (26.419208ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-399000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-399000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-399000 -n no-preload-399000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-399000 -n no-preload-399000: exit status 7 (27.563166ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-399000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (5.2s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-399000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-399000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.1: exit status 80 (5.172655375s)

                                                
                                                
-- stdout --
	* [no-preload-399000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17243
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node no-preload-399000 in cluster no-preload-399000
	* Restarting existing qemu2 VM for "no-preload-399000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-399000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 15:25:10.427168    5852 out.go:296] Setting OutFile to fd 1 ...
	I0914 15:25:10.427313    5852 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:25:10.427316    5852 out.go:309] Setting ErrFile to fd 2...
	I0914 15:25:10.427318    5852 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:25:10.427589    5852 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17243-1006/.minikube/bin
	I0914 15:25:10.428801    5852 out.go:303] Setting JSON to false
	I0914 15:25:10.444092    5852 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3284,"bootTime":1694727026,"procs":411,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 15:25:10.444156    5852 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0914 15:25:10.449211    5852 out.go:177] * [no-preload-399000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0914 15:25:10.456178    5852 out.go:177]   - MINIKUBE_LOCATION=17243
	I0914 15:25:10.456305    5852 notify.go:220] Checking for updates...
	I0914 15:25:10.463216    5852 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	I0914 15:25:10.466188    5852 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 15:25:10.469200    5852 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 15:25:10.472223    5852 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	I0914 15:25:10.475127    5852 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 15:25:10.478513    5852 config.go:182] Loaded profile config "no-preload-399000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 15:25:10.478787    5852 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 15:25:10.483182    5852 out.go:177] * Using the qemu2 driver based on existing profile
	I0914 15:25:10.490237    5852 start.go:298] selected driver: qemu2
	I0914 15:25:10.490242    5852 start.go:902] validating driver "qemu2" against &{Name:no-preload-399000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.1 ClusterName:no-preload-399000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:
false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 15:25:10.490321    5852 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 15:25:10.492322    5852 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 15:25:10.492348    5852 cni.go:84] Creating CNI manager for ""
	I0914 15:25:10.492357    5852 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 15:25:10.492362    5852 start_flags.go:321] config:
	{Name:no-preload-399000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:no-preload-399000 Namespace:default A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/mi
nikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 15:25:10.496513    5852 iso.go:125] acquiring lock: {Name:mkd5b68b252c18146a825ff5365883948b3c6983 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 15:25:10.503148    5852 out.go:177] * Starting control plane node no-preload-399000 in cluster no-preload-399000
	I0914 15:25:10.507200    5852 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0914 15:25:10.507286    5852 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/no-preload-399000/config.json ...
	I0914 15:25:10.507303    5852 cache.go:107] acquiring lock: {Name:mkd53f39c8984a1a6e842ba1d0d45a9f41a4874f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 15:25:10.507314    5852 cache.go:107] acquiring lock: {Name:mk5f3cab12a0c015786fc1c3a0e59b4998da30db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 15:25:10.507371    5852 cache.go:115] /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0914 15:25:10.507377    5852 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 77.917µs
	I0914 15:25:10.507384    5852 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0914 15:25:10.507374    5852 cache.go:107] acquiring lock: {Name:mkda76779f206f98ebd56e90f225625e24992501 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 15:25:10.507391    5852 cache.go:115] /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.1 exists
	I0914 15:25:10.507394    5852 cache.go:107] acquiring lock: {Name:mkf290d5b451e5e83db5b843811709e1fe8bd1b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 15:25:10.507396    5852 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.28.1" -> "/Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.1" took 98.083µs
	I0914 15:25:10.507405    5852 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.28.1 -> /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.1 succeeded
	I0914 15:25:10.507412    5852 cache.go:107] acquiring lock: {Name:mkf4ea0b23692d9f84164f381f3c43b538c1faca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 15:25:10.507412    5852 cache.go:107] acquiring lock: {Name:mk38405f9d7ad5062b41817fc65cd3f9f3b4b705 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 15:25:10.507421    5852 cache.go:107] acquiring lock: {Name:mk3abd5de080bbd66581faee8e8db1c4cd224593 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 15:25:10.507459    5852 cache.go:115] /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.1 exists
	I0914 15:25:10.507318    5852 cache.go:107] acquiring lock: {Name:mkecbaa62c5ff769f5bb17abe1e42cb21534b918 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 15:25:10.507478    5852 cache.go:115] /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 exists
	I0914 15:25:10.507510    5852 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.28.1" -> "/Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.1" took 98.625µs
	I0914 15:25:10.507516    5852 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.28.1 -> /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.1 succeeded
	I0914 15:25:10.507480    5852 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I0914 15:25:10.507571    5852 cache.go:115] /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.1 exists
	I0914 15:25:10.507581    5852 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.28.1" -> "/Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.1" took 169.167µs
	I0914 15:25:10.507587    5852 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.28.1 -> /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.1 succeeded
	I0914 15:25:10.507522    5852 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.10.1" -> "/Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1" took 92.833µs
	I0914 15:25:10.507606    5852 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.10.1 -> /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 succeeded
	I0914 15:25:10.507555    5852 cache.go:115] /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.1 exists
	I0914 15:25:10.507640    5852 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.28.1" -> "/Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.1" took 336.25µs
	I0914 15:25:10.507645    5852 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.28.1 -> /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.1 succeeded
	I0914 15:25:10.507622    5852 cache.go:115] /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0914 15:25:10.507649    5852 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 302.209µs
	I0914 15:25:10.507654    5852 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0914 15:25:10.507659    5852 start.go:365] acquiring machines lock for no-preload-399000: {Name:mk831f2bd5a9a0c932c8a59972319d68edb53d07 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 15:25:10.507718    5852 start.go:369] acquired machines lock for "no-preload-399000" in 43.709µs
	I0914 15:25:10.507730    5852 start.go:96] Skipping create...Using existing machine configuration
	I0914 15:25:10.507737    5852 fix.go:54] fixHost starting: 
	I0914 15:25:10.507880    5852 fix.go:102] recreateIfNeeded on no-preload-399000: state=Stopped err=<nil>
	W0914 15:25:10.507889    5852 fix.go:128] unexpected machine state, will restart: <nil>
	I0914 15:25:10.516150    5852 out.go:177] * Restarting existing qemu2 VM for "no-preload-399000" ...
	I0914 15:25:10.520176    5852 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/no-preload-399000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/no-preload-399000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/no-preload-399000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:22:b5:00:8f:ed -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/no-preload-399000/disk.qcow2
	I0914 15:25:10.520812    5852 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I0914 15:25:10.522462    5852 main.go:141] libmachine: STDOUT: 
	I0914 15:25:10.522485    5852 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 15:25:10.522517    5852 fix.go:56] fixHost completed within 14.7805ms
	I0914 15:25:10.522522    5852 start.go:83] releasing machines lock for "no-preload-399000", held for 14.798334ms
	W0914 15:25:10.522530    5852 start.go:688] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 15:25:10.522594    5852 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:25:10.522599    5852 start.go:703] Will try again in 5 seconds ...
	I0914 15:25:11.052453    5852 cache.go:162] opening:  /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0
	I0914 15:25:15.524527    5852 start.go:365] acquiring machines lock for no-preload-399000: {Name:mk831f2bd5a9a0c932c8a59972319d68edb53d07 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 15:25:15.524583    5852 start.go:369] acquired machines lock for "no-preload-399000" in 40.375µs
	I0914 15:25:15.524598    5852 start.go:96] Skipping create...Using existing machine configuration
	I0914 15:25:15.524603    5852 fix.go:54] fixHost starting: 
	I0914 15:25:15.524744    5852 fix.go:102] recreateIfNeeded on no-preload-399000: state=Stopped err=<nil>
	W0914 15:25:15.524749    5852 fix.go:128] unexpected machine state, will restart: <nil>
	I0914 15:25:15.529421    5852 out.go:177] * Restarting existing qemu2 VM for "no-preload-399000" ...
	I0914 15:25:15.536448    5852 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/no-preload-399000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/no-preload-399000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/no-preload-399000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:22:b5:00:8f:ed -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/no-preload-399000/disk.qcow2
	I0914 15:25:15.538522    5852 main.go:141] libmachine: STDOUT: 
	I0914 15:25:15.538545    5852 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 15:25:15.538569    5852 fix.go:56] fixHost completed within 13.966459ms
	I0914 15:25:15.538574    5852 start.go:83] releasing machines lock for "no-preload-399000", held for 13.9825ms
	W0914 15:25:15.538631    5852 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-399000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-399000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:25:15.545437    5852 out.go:177] 
	W0914 15:25:15.548391    5852 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 15:25:15.548399    5852 out.go:239] * 
	* 
	W0914 15:25:15.548934    5852 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 15:25:15.563400    5852 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-399000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-399000 -n no-preload-399000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-399000 -n no-preload-399000: exit status 7 (29.444042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-399000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.20s)
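
Note: every failed start in this run stops at the same point: socket_vmnet_client cannot connect to /var/run/socket_vmnet. As a quick sanity check outside the suite, a minimal Go probe (hypothetical, not part of the minikube tests) can dial the same unix socket and confirm whether a socket_vmnet daemon is actually accepting connections:

// socketprobe.go - a minimal sketch (not from the minikube repo) that checks
// whether anything is listening on the socket_vmnet control socket.
// The path below is taken from the failing driver-start logs above.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path as shown in the test logs

	// A "connection refused" here reproduces the driver-start failure:
	// the socket file may exist, but no socket_vmnet daemon is accepting.
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}
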

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-018000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-018000 -n old-k8s-version-018000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-018000 -n old-k8s-version-018000: exit status 7 (32.054916ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-018000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.05s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-018000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-018000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-018000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (25.945542ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-018000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-018000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-018000 -n old-k8s-version-018000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-018000 -n old-k8s-version-018000: exit status 7 (28.491208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-018000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.05s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p old-k8s-version-018000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p old-k8s-version-018000 "sudo crictl images -o json": exit status 89 (39.361709ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p old-k8s-version-018000"

                                                
                                                
-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p old-k8s-version-018000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p old-k8s-version-018000"
start_stop_delete_test.go:304: v1.16.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.6.2",
- 	"k8s.gcr.io/etcd:3.3.15-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.16.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.16.0",
- 	"k8s.gcr.io/kube-proxy:v1.16.0",
- 	"k8s.gcr.io/kube-scheduler:v1.16.0",
- 	"k8s.gcr.io/pause:3.1",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-018000 -n old-k8s-version-018000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-018000 -n old-k8s-version-018000: exit status 7 (28.619458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-018000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
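
Note: the decode error above comes from feeding the "control plane node must be running" banner into a JSON decoder instead of real `crictl images -o json` output, hence "invalid character '*'". A rough sketch of that decode step is below; the struct shape is an assumption based on the usual crictl JSON layout (an images array carrying repoTags) and is not copied from the test helpers:

// imagesjson.go - a rough sketch of the decode step that fails above.
// Field names are assumed from typical `crictl images -o json` output,
// not taken from the minikube test code.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	// In the test, this input comes from `minikube ssh "sudo crictl images -o json"`.
	// When the VM never started, stdin holds the plain-text banner instead of JSON,
	// and Decode fails on the leading '*', exactly as logged.
	var list imageList
	if err := json.NewDecoder(os.Stdin).Decode(&list); err != nil {
		fmt.Fprintf(os.Stderr, "failed to decode images json: %v\n", err)
		os.Exit(1)
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			fmt.Println(tag)
		}
	}
}
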

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-018000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-018000 --alsologtostderr -v=1: exit status 89 (43.93625ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p old-k8s-version-018000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 15:25:15.294080    5885 out.go:296] Setting OutFile to fd 1 ...
	I0914 15:25:15.294466    5885 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:25:15.294469    5885 out.go:309] Setting ErrFile to fd 2...
	I0914 15:25:15.294472    5885 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:25:15.294610    5885 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17243-1006/.minikube/bin
	I0914 15:25:15.294808    5885 out.go:303] Setting JSON to false
	I0914 15:25:15.294818    5885 mustload.go:65] Loading cluster: old-k8s-version-018000
	I0914 15:25:15.295001    5885 config.go:182] Loaded profile config "old-k8s-version-018000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0914 15:25:15.299413    5885 out.go:177] * The control plane node must be running for this command
	I0914 15:25:15.307392    5885 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-018000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-018000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-018000 -n old-k8s-version-018000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-018000 -n old-k8s-version-018000: exit status 7 (29.199292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-018000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-018000 -n old-k8s-version-018000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-018000 -n old-k8s-version-018000: exit status 7 (28.587334ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-018000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-399000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-399000 -n no-preload-399000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-399000 -n no-preload-399000: exit status 7 (28.971542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-399000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-399000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-399000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-399000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.713375ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-399000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-399000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-399000 -n no-preload-399000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-399000 -n no-preload-399000: exit status 7 (31.95225ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-399000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p no-preload-399000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p no-preload-399000 "sudo crictl images -o json": exit status 89 (41.998125ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p no-preload-399000"

                                                
                                                
-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p no-preload-399000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p no-preload-399000"
start_stop_delete_test.go:304: v1.28.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.1",
- 	"registry.k8s.io/kube-controller-manager:v1.28.1",
- 	"registry.k8s.io/kube-proxy:v1.28.1",
- 	"registry.k8s.io/kube-scheduler:v1.28.1",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-399000 -n no-preload-399000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-399000 -n no-preload-399000: exit status 7 (29.903167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-399000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (9.87s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-546000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-546000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.1: exit status 80 (9.813632959s)

                                                
                                                
-- stdout --
	* [embed-certs-546000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17243
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node embed-certs-546000 in cluster embed-certs-546000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-546000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 15:25:15.778524    5919 out.go:296] Setting OutFile to fd 1 ...
	I0914 15:25:15.778661    5919 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:25:15.778666    5919 out.go:309] Setting ErrFile to fd 2...
	I0914 15:25:15.778668    5919 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:25:15.778796    5919 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17243-1006/.minikube/bin
	I0914 15:25:15.780100    5919 out.go:303] Setting JSON to false
	I0914 15:25:15.797077    5919 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3289,"bootTime":1694727026,"procs":411,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 15:25:15.797141    5919 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0914 15:25:15.805891    5919 out.go:177] * [embed-certs-546000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0914 15:25:15.813881    5919 out.go:177]   - MINIKUBE_LOCATION=17243
	I0914 15:25:15.810927    5919 notify.go:220] Checking for updates...
	I0914 15:25:15.821823    5919 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	I0914 15:25:15.825851    5919 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 15:25:15.827001    5919 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 15:25:15.830880    5919 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	I0914 15:25:15.834862    5919 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 15:25:15.838129    5919 config.go:182] Loaded profile config "multinode-463000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 15:25:15.838198    5919 config.go:182] Loaded profile config "no-preload-399000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 15:25:15.838246    5919 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 15:25:15.841836    5919 out.go:177] * Using the qemu2 driver based on user configuration
	I0914 15:25:15.848828    5919 start.go:298] selected driver: qemu2
	I0914 15:25:15.848836    5919 start.go:902] validating driver "qemu2" against <nil>
	I0914 15:25:15.848842    5919 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 15:25:15.850681    5919 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0914 15:25:15.854829    5919 out.go:177] * Automatically selected the socket_vmnet network
	I0914 15:25:15.857915    5919 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 15:25:15.857939    5919 cni.go:84] Creating CNI manager for ""
	I0914 15:25:15.857947    5919 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 15:25:15.857950    5919 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0914 15:25:15.857957    5919 start_flags.go:321] config:
	{Name:embed-certs-546000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:embed-certs-546000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SS
HAgentPID:0 AutoPauseInterval:1m0s}
	I0914 15:25:15.862685    5919 iso.go:125] acquiring lock: {Name:mkd5b68b252c18146a825ff5365883948b3c6983 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 15:25:15.870850    5919 out.go:177] * Starting control plane node embed-certs-546000 in cluster embed-certs-546000
	I0914 15:25:15.874861    5919 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0914 15:25:15.874881    5919 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0914 15:25:15.874890    5919 cache.go:57] Caching tarball of preloaded images
	I0914 15:25:15.874948    5919 preload.go:174] Found /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 15:25:15.874953    5919 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0914 15:25:15.875016    5919 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/embed-certs-546000/config.json ...
	I0914 15:25:15.875028    5919 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/embed-certs-546000/config.json: {Name:mk9bb51ddb658ad4fe26a364df9de55e347c8228 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 15:25:15.875193    5919 start.go:365] acquiring machines lock for embed-certs-546000: {Name:mk831f2bd5a9a0c932c8a59972319d68edb53d07 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 15:25:15.875214    5919 start.go:369] acquired machines lock for "embed-certs-546000" in 16.167µs
	I0914 15:25:15.875226    5919 start.go:93] Provisioning new machine with config: &{Name:embed-certs-546000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.28.1 ClusterName:embed-certs-546000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mount
MSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 15:25:15.875271    5919 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 15:25:15.878810    5919 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0914 15:25:15.892830    5919 start.go:159] libmachine.API.Create for "embed-certs-546000" (driver="qemu2")
	I0914 15:25:15.892851    5919 client.go:168] LocalClient.Create starting
	I0914 15:25:15.892925    5919 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem
	I0914 15:25:15.892949    5919 main.go:141] libmachine: Decoding PEM data...
	I0914 15:25:15.892960    5919 main.go:141] libmachine: Parsing certificate...
	I0914 15:25:15.893001    5919 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/cert.pem
	I0914 15:25:15.893018    5919 main.go:141] libmachine: Decoding PEM data...
	I0914 15:25:15.893026    5919 main.go:141] libmachine: Parsing certificate...
	I0914 15:25:15.893399    5919 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17243-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1694625400-17243-arm64.iso...
	I0914 15:25:16.051507    5919 main.go:141] libmachine: Creating SSH key...
	I0914 15:25:16.167083    5919 main.go:141] libmachine: Creating Disk image...
	I0914 15:25:16.167102    5919 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 15:25:16.167519    5919 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/embed-certs-546000/disk.qcow2.raw /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/embed-certs-546000/disk.qcow2
	I0914 15:25:16.176641    5919 main.go:141] libmachine: STDOUT: 
	I0914 15:25:16.176659    5919 main.go:141] libmachine: STDERR: 
	I0914 15:25:16.176729    5919 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/embed-certs-546000/disk.qcow2 +20000M
	I0914 15:25:16.184584    5919 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 15:25:16.184602    5919 main.go:141] libmachine: STDERR: 
	I0914 15:25:16.184616    5919 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/embed-certs-546000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/embed-certs-546000/disk.qcow2
	I0914 15:25:16.184625    5919 main.go:141] libmachine: Starting QEMU VM...
	I0914 15:25:16.184670    5919 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/embed-certs-546000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/embed-certs-546000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/embed-certs-546000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:1c:3d:64:26:d4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/embed-certs-546000/disk.qcow2
	I0914 15:25:16.186463    5919 main.go:141] libmachine: STDOUT: 
	I0914 15:25:16.186488    5919 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 15:25:16.186510    5919 client.go:171] LocalClient.Create took 293.658875ms
	I0914 15:25:18.188666    5919 start.go:128] duration metric: createHost completed in 2.313382625s
	I0914 15:25:18.188757    5919 start.go:83] releasing machines lock for "embed-certs-546000", held for 2.313582334s
	W0914 15:25:18.188815    5919 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:25:18.198904    5919 out.go:177] * Deleting "embed-certs-546000" in qemu2 ...
	W0914 15:25:18.217026    5919 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:25:18.217053    5919 start.go:703] Will try again in 5 seconds ...
	I0914 15:25:23.212325    5919 start.go:365] acquiring machines lock for embed-certs-546000: {Name:mk831f2bd5a9a0c932c8a59972319d68edb53d07 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 15:25:23.212790    5919 start.go:369] acquired machines lock for "embed-certs-546000" in 370.833µs
	I0914 15:25:23.212941    5919 start.go:93] Provisioning new machine with config: &{Name:embed-certs-546000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.28.1 ClusterName:embed-certs-546000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mount
MSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 15:25:23.213208    5919 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 15:25:23.227654    5919 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0914 15:25:23.275521    5919 start.go:159] libmachine.API.Create for "embed-certs-546000" (driver="qemu2")
	I0914 15:25:23.275603    5919 client.go:168] LocalClient.Create starting
	I0914 15:25:23.275716    5919 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem
	I0914 15:25:23.275767    5919 main.go:141] libmachine: Decoding PEM data...
	I0914 15:25:23.275788    5919 main.go:141] libmachine: Parsing certificate...
	I0914 15:25:23.275869    5919 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/cert.pem
	I0914 15:25:23.275914    5919 main.go:141] libmachine: Decoding PEM data...
	I0914 15:25:23.275930    5919 main.go:141] libmachine: Parsing certificate...
	I0914 15:25:23.276456    5919 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17243-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1694625400-17243-arm64.iso...
	I0914 15:25:23.403070    5919 main.go:141] libmachine: Creating SSH key...
	I0914 15:25:23.496296    5919 main.go:141] libmachine: Creating Disk image...
	I0914 15:25:23.496301    5919 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 15:25:23.496449    5919 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/embed-certs-546000/disk.qcow2.raw /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/embed-certs-546000/disk.qcow2
	I0914 15:25:23.505103    5919 main.go:141] libmachine: STDOUT: 
	I0914 15:25:23.505121    5919 main.go:141] libmachine: STDERR: 
	I0914 15:25:23.505180    5919 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/embed-certs-546000/disk.qcow2 +20000M
	I0914 15:25:23.512340    5919 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 15:25:23.512354    5919 main.go:141] libmachine: STDERR: 
	I0914 15:25:23.512365    5919 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/embed-certs-546000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/embed-certs-546000/disk.qcow2
	I0914 15:25:23.512379    5919 main.go:141] libmachine: Starting QEMU VM...
	I0914 15:25:23.512423    5919 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/embed-certs-546000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/embed-certs-546000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/embed-certs-546000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:eb:07:b7:42:69 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/embed-certs-546000/disk.qcow2
	I0914 15:25:23.513958    5919 main.go:141] libmachine: STDOUT: 
	I0914 15:25:23.513973    5919 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 15:25:23.513986    5919 client.go:171] LocalClient.Create took 238.729542ms
	I0914 15:25:25.513349    5919 start.go:128] duration metric: createHost completed in 2.3033615s
	I0914 15:25:25.513407    5919 start.go:83] releasing machines lock for "embed-certs-546000", held for 2.303844s
	W0914 15:25:25.513829    5919 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-546000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-546000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:25:25.527403    5919 out.go:177] 
	W0914 15:25:25.532608    5919 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 15:25:25.532660    5919 out.go:239] * 
	* 
	W0914 15:25:25.535649    5919 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 15:25:25.544237    5919 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-546000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-546000 -n embed-certs-546000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-546000 -n embed-certs-546000: exit status 7 (52.501625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-546000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (9.87s)
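
Note: the embed-certs trace shows the driver's create/retry flow as captured in the log: the first create fails, the half-created profile is deleted, and a second attempt runs about five seconds later ("Will try again in 5 seconds") before the test exits with GUEST_PROVISION. A simplified sketch of that retry-once-after-a-delay shape, with hypothetical names that do not correspond to minikube internals:

// retrycreate.go - a simplified sketch of the "StartHost failed, but will try
// again" flow visible in the log above; createVM stands in for the qemu2
// driver's create step and always fails the way these logs do.
package main

import (
	"errors"
	"fmt"
	"time"
)

func createVM(name string) error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func startWithRetry(name string) error {
	if err := createVM(name); err != nil {
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		time.Sleep(5 * time.Second) // the log shows a fixed 5s pause between attempts
		if err := createVM(name); err != nil {
			return fmt.Errorf("error provisioning guest: %w", err)
		}
	}
	return nil
}

func main() {
	if err := startWithRetry("embed-certs-546000"); err != nil {
		fmt.Println("X Exiting due to GUEST_PROVISION:", err)
	}
}
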

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-399000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-399000 --alsologtostderr -v=1: exit status 89 (49.727834ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p no-preload-399000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 15:25:15.792779    5921 out.go:296] Setting OutFile to fd 1 ...
	I0914 15:25:15.792957    5921 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:25:15.792960    5921 out.go:309] Setting ErrFile to fd 2...
	I0914 15:25:15.792963    5921 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:25:15.793105    5921 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17243-1006/.minikube/bin
	I0914 15:25:15.793338    5921 out.go:303] Setting JSON to false
	I0914 15:25:15.793348    5921 mustload.go:65] Loading cluster: no-preload-399000
	I0914 15:25:15.793539    5921 config.go:182] Loaded profile config "no-preload-399000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 15:25:15.797899    5921 out.go:177] * The control plane node must be running for this command
	I0914 15:25:15.805888    5921 out.go:177]   To start a cluster, run: "minikube start -p no-preload-399000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-399000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-399000 -n no-preload-399000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-399000 -n no-preload-399000: exit status 7 (30.510167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-399000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-399000 -n no-preload-399000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-399000 -n no-preload-399000: exit status 7 (33.697875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-399000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.11s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (11.51s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-850000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-850000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.1: exit status 80 (11.437728375s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-850000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17243
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node default-k8s-diff-port-850000 in cluster default-k8s-diff-port-850000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-850000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 15:25:16.543963    5965 out.go:296] Setting OutFile to fd 1 ...
	I0914 15:25:16.544117    5965 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:25:16.544120    5965 out.go:309] Setting ErrFile to fd 2...
	I0914 15:25:16.544123    5965 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:25:16.544255    5965 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17243-1006/.minikube/bin
	I0914 15:25:16.545314    5965 out.go:303] Setting JSON to false
	I0914 15:25:16.561259    5965 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3290,"bootTime":1694727026,"procs":411,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 15:25:16.561337    5965 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0914 15:25:16.566207    5965 out.go:177] * [default-k8s-diff-port-850000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0914 15:25:16.573017    5965 out.go:177]   - MINIKUBE_LOCATION=17243
	I0914 15:25:16.576093    5965 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	I0914 15:25:16.573059    5965 notify.go:220] Checking for updates...
	I0914 15:25:16.582986    5965 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 15:25:16.586042    5965 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 15:25:16.589137    5965 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	I0914 15:25:16.591963    5965 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 15:25:16.595409    5965 config.go:182] Loaded profile config "embed-certs-546000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 15:25:16.595494    5965 config.go:182] Loaded profile config "multinode-463000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 15:25:16.595538    5965 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 15:25:16.600044    5965 out.go:177] * Using the qemu2 driver based on user configuration
	I0914 15:25:16.607018    5965 start.go:298] selected driver: qemu2
	I0914 15:25:16.607023    5965 start.go:902] validating driver "qemu2" against <nil>
	I0914 15:25:16.607029    5965 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 15:25:16.609042    5965 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0914 15:25:16.612032    5965 out.go:177] * Automatically selected the socket_vmnet network
	I0914 15:25:16.615093    5965 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 15:25:16.615125    5965 cni.go:84] Creating CNI manager for ""
	I0914 15:25:16.615136    5965 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 15:25:16.615148    5965 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0914 15:25:16.615154    5965 start_flags.go:321] config:
	{Name:default-k8s-diff-port-850000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-850000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.lo
cal ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet Stat
icIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 15:25:16.619378    5965 iso.go:125] acquiring lock: {Name:mkd5b68b252c18146a825ff5365883948b3c6983 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 15:25:16.626042    5965 out.go:177] * Starting control plane node default-k8s-diff-port-850000 in cluster default-k8s-diff-port-850000
	I0914 15:25:16.630007    5965 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0914 15:25:16.630028    5965 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0914 15:25:16.630044    5965 cache.go:57] Caching tarball of preloaded images
	I0914 15:25:16.630110    5965 preload.go:174] Found /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 15:25:16.630122    5965 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0914 15:25:16.630193    5965 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/default-k8s-diff-port-850000/config.json ...
	I0914 15:25:16.630216    5965 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/default-k8s-diff-port-850000/config.json: {Name:mk8cec73627863b98ceb2a88f9be697d69162b77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 15:25:16.630431    5965 start.go:365] acquiring machines lock for default-k8s-diff-port-850000: {Name:mk831f2bd5a9a0c932c8a59972319d68edb53d07 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 15:25:18.188925    5965 start.go:369] acquired machines lock for "default-k8s-diff-port-850000" in 1.558449333s
	I0914 15:25:18.189100    5965 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-850000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22
KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-850000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:
docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 15:25:18.189337    5965 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 15:25:18.194949    5965 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0914 15:25:18.238017    5965 start.go:159] libmachine.API.Create for "default-k8s-diff-port-850000" (driver="qemu2")
	I0914 15:25:18.238061    5965 client.go:168] LocalClient.Create starting
	I0914 15:25:18.238213    5965 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem
	I0914 15:25:18.238269    5965 main.go:141] libmachine: Decoding PEM data...
	I0914 15:25:18.238292    5965 main.go:141] libmachine: Parsing certificate...
	I0914 15:25:18.238349    5965 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/cert.pem
	I0914 15:25:18.238384    5965 main.go:141] libmachine: Decoding PEM data...
	I0914 15:25:18.238398    5965 main.go:141] libmachine: Parsing certificate...
	I0914 15:25:18.239003    5965 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17243-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1694625400-17243-arm64.iso...
	I0914 15:25:18.371123    5965 main.go:141] libmachine: Creating SSH key...
	I0914 15:25:18.412825    5965 main.go:141] libmachine: Creating Disk image...
	I0914 15:25:18.412831    5965 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 15:25:18.412983    5965 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/default-k8s-diff-port-850000/disk.qcow2.raw /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/default-k8s-diff-port-850000/disk.qcow2
	I0914 15:25:18.421521    5965 main.go:141] libmachine: STDOUT: 
	I0914 15:25:18.421535    5965 main.go:141] libmachine: STDERR: 
	I0914 15:25:18.421586    5965 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/default-k8s-diff-port-850000/disk.qcow2 +20000M
	I0914 15:25:18.428668    5965 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 15:25:18.428678    5965 main.go:141] libmachine: STDERR: 
	I0914 15:25:18.428696    5965 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/default-k8s-diff-port-850000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/default-k8s-diff-port-850000/disk.qcow2
	I0914 15:25:18.428711    5965 main.go:141] libmachine: Starting QEMU VM...
	I0914 15:25:18.428739    5965 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/default-k8s-diff-port-850000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/default-k8s-diff-port-850000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/default-k8s-diff-port-850000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:70:e2:a0:d0:bd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/default-k8s-diff-port-850000/disk.qcow2
	I0914 15:25:18.430216    5965 main.go:141] libmachine: STDOUT: 
	I0914 15:25:18.430229    5965 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 15:25:18.430257    5965 client.go:171] LocalClient.Create took 192.182666ms
	I0914 15:25:20.430045    5965 start.go:128] duration metric: createHost completed in 2.243065875s
	I0914 15:25:20.430114    5965 start.go:83] releasing machines lock for "default-k8s-diff-port-850000", held for 2.243532875s
	W0914 15:25:20.430181    5965 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:25:20.442330    5965 out.go:177] * Deleting "default-k8s-diff-port-850000" in qemu2 ...
	W0914 15:25:20.466167    5965 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:25:20.466204    5965 start.go:703] Will try again in 5 seconds ...
	I0914 15:25:25.460690    5965 start.go:365] acquiring machines lock for default-k8s-diff-port-850000: {Name:mk831f2bd5a9a0c932c8a59972319d68edb53d07 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 15:25:25.513511    5965 start.go:369] acquired machines lock for "default-k8s-diff-port-850000" in 52.781875ms
	I0914 15:25:25.513692    5965 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-850000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22
KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-850000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:
docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 15:25:25.513986    5965 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 15:25:25.524454    5965 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0914 15:25:25.572005    5965 start.go:159] libmachine.API.Create for "default-k8s-diff-port-850000" (driver="qemu2")
	I0914 15:25:25.572060    5965 client.go:168] LocalClient.Create starting
	I0914 15:25:25.572163    5965 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem
	I0914 15:25:25.572214    5965 main.go:141] libmachine: Decoding PEM data...
	I0914 15:25:25.572237    5965 main.go:141] libmachine: Parsing certificate...
	I0914 15:25:25.572312    5965 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/cert.pem
	I0914 15:25:25.572344    5965 main.go:141] libmachine: Decoding PEM data...
	I0914 15:25:25.572359    5965 main.go:141] libmachine: Parsing certificate...
	I0914 15:25:25.572908    5965 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17243-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1694625400-17243-arm64.iso...
	I0914 15:25:25.704851    5965 main.go:141] libmachine: Creating SSH key...
	I0914 15:25:25.862807    5965 main.go:141] libmachine: Creating Disk image...
	I0914 15:25:25.862815    5965 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 15:25:25.863425    5965 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/default-k8s-diff-port-850000/disk.qcow2.raw /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/default-k8s-diff-port-850000/disk.qcow2
	I0914 15:25:25.876802    5965 main.go:141] libmachine: STDOUT: 
	I0914 15:25:25.876822    5965 main.go:141] libmachine: STDERR: 
	I0914 15:25:25.876873    5965 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/default-k8s-diff-port-850000/disk.qcow2 +20000M
	I0914 15:25:25.888287    5965 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 15:25:25.888301    5965 main.go:141] libmachine: STDERR: 
	I0914 15:25:25.888315    5965 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/default-k8s-diff-port-850000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/default-k8s-diff-port-850000/disk.qcow2
	I0914 15:25:25.888321    5965 main.go:141] libmachine: Starting QEMU VM...
	I0914 15:25:25.888367    5965 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/default-k8s-diff-port-850000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/default-k8s-diff-port-850000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/default-k8s-diff-port-850000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:21:49:c8:33:5a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/default-k8s-diff-port-850000/disk.qcow2
	I0914 15:25:25.890219    5965 main.go:141] libmachine: STDOUT: 
	I0914 15:25:25.890234    5965 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 15:25:25.890248    5965 client.go:171] LocalClient.Create took 318.595667ms
	I0914 15:25:27.890094    5965 start.go:128] duration metric: createHost completed in 2.378937041s
	I0914 15:25:27.890164    5965 start.go:83] releasing machines lock for "default-k8s-diff-port-850000", held for 2.379530625s
	W0914 15:25:27.890513    5965 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-850000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-850000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:25:27.907085    5965 out.go:177] 
	W0914 15:25:27.915228    5965 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 15:25:27.915269    5965 out.go:239] * 
	* 
	W0914 15:25:27.917317    5965 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 15:25:27.925977    5965 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-850000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-850000 -n default-k8s-diff-port-850000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-850000 -n default-k8s-diff-port-850000: exit status 7 (64.742458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-850000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (11.51s)
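Every failure in this group traces back to the same libmachine line captured above: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so the qemu2 VM is never created or restarted. The following is a minimal diagnostic sketch, not part of the test suite, that performs the same connectivity check; it assumes the default socket path shown in the libmachine command line.

```go
// probesocket.go - hedged diagnostic sketch (not part of minikube): check whether the
// unix socket that socket_vmnet_client needs is accepting connections.
// The path below is the one shown in the libmachine command line above; adjust it if
// socket_vmnet was started with a different socket.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const socketPath = "/var/run/socket_vmnet"

	conn, err := net.DialTimeout("unix", socketPath, 2*time.Second)
	if err != nil {
		// A "connection refused" (or "no such file or directory") here reproduces the
		// STDERR captured by libmachine: the socket_vmnet daemon is not listening.
		fmt.Fprintf(os.Stderr, "socket_vmnet unreachable at %s: %v\n", socketPath, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections at", socketPath)
}
```

On this run the check would fail, matching the repeated `Failed to connect to "/var/run/socket_vmnet": Connection refused` lines, which points at the daemon on the build host rather than at the tests themselves.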

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-546000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-546000 create -f testdata/busybox.yaml: exit status 1 (31.481125ms)

                                                
                                                
** stderr ** 
	error: no openapi getter

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-546000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-546000 -n embed-certs-546000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-546000 -n embed-certs-546000: exit status 7 (32.556875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-546000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-546000 -n embed-certs-546000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-546000 -n embed-certs-546000: exit status 7 (34.093291ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-546000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.10s)
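The `error: no openapi getter` from kubectl is a downstream symptom of the same problem: the embed-certs-546000 cluster never came up, so kubectl has no reachable API server to query when creating the busybox manifest. Below is a hedged client-go sketch, again not part of the test suite, that performs a basic reachability check against the same context; the context name is taken from the log above and the kubeconfig is assumed to be the KUBECONFIG path printed in the start output.

```go
// checkcontext.go - hedged sketch: verify whether the kubeconfig context used by the
// failing `kubectl create` call can reach an API server at all.
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	rules := clientcmd.NewDefaultClientConfigLoadingRules() // honours $KUBECONFIG
	overrides := &clientcmd.ConfigOverrides{CurrentContext: "embed-certs-546000"}

	cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(rules, overrides).ClientConfig()
	if err != nil {
		fmt.Fprintln(os.Stderr, "context not usable:", err)
		os.Exit(1)
	}

	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		fmt.Fprintln(os.Stderr, "building discovery client:", err)
		os.Exit(1)
	}

	// With the VM stopped this request fails outright; kubectl surfaces the same
	// unreachable-server condition less directly as "no openapi getter".
	v, err := dc.ServerVersion()
	if err != nil {
		fmt.Fprintln(os.Stderr, "API server unreachable:", err)
		os.Exit(1)
	}
	fmt.Println("API server reachable, version:", v.GitVersion)
}
```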

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-546000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-546000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-546000 describe deploy/metrics-server -n kube-system: exit status 1 (27.51025ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-546000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-546000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-546000 -n embed-certs-546000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-546000 -n embed-certs-546000: exit status 7 (30.264833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-546000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)
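Here kubectl fails one step earlier: the kubeconfig has no "embed-certs-546000" context because the start that should have written it exited before provisioning the host. A small hedged sketch using client-go's clientcmd package can confirm which contexts the kubeconfig from this run actually contains; the kubeconfig path is assumed to be the KUBECONFIG value printed in the start output.

```go
// listcontexts.go - hedged sketch: print the contexts present in the kubeconfig used by
// this run, to confirm whether "embed-certs-546000" was ever written.
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := os.Getenv("KUBECONFIG")
	if kubeconfig == "" {
		fmt.Fprintln(os.Stderr, "set KUBECONFIG to the path printed in the start output")
		os.Exit(1)
	}

	cfg, err := clientcmd.LoadFromFile(kubeconfig)
	if err != nil {
		fmt.Fprintln(os.Stderr, "loading kubeconfig:", err)
		os.Exit(1)
	}

	for name, ctx := range cfg.Contexts {
		fmt.Printf("context %q -> cluster %q\n", name, ctx.Cluster)
	}
}
```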

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (7.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-546000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-546000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.1: exit status 80 (7.000488417s)

                                                
                                                
-- stdout --
	* [embed-certs-546000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17243
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node embed-certs-546000 in cluster embed-certs-546000
	* Restarting existing qemu2 VM for "embed-certs-546000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-546000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 15:25:26.008824    6001 out.go:296] Setting OutFile to fd 1 ...
	I0914 15:25:26.008951    6001 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:25:26.008954    6001 out.go:309] Setting ErrFile to fd 2...
	I0914 15:25:26.008957    6001 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:25:26.009082    6001 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17243-1006/.minikube/bin
	I0914 15:25:26.010021    6001 out.go:303] Setting JSON to false
	I0914 15:25:26.024936    6001 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3300,"bootTime":1694727026,"procs":410,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 15:25:26.025009    6001 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0914 15:25:26.028456    6001 out.go:177] * [embed-certs-546000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0914 15:25:26.035437    6001 out.go:177]   - MINIKUBE_LOCATION=17243
	I0914 15:25:26.035478    6001 notify.go:220] Checking for updates...
	I0914 15:25:26.043483    6001 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	I0914 15:25:26.046409    6001 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 15:25:26.047730    6001 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 15:25:26.050378    6001 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	I0914 15:25:26.053366    6001 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 15:25:26.056669    6001 config.go:182] Loaded profile config "embed-certs-546000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 15:25:26.056944    6001 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 15:25:26.061302    6001 out.go:177] * Using the qemu2 driver based on existing profile
	I0914 15:25:26.068331    6001 start.go:298] selected driver: qemu2
	I0914 15:25:26.068335    6001 start.go:902] validating driver "qemu2" against &{Name:embed-certs-546000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.1 ClusterName:embed-certs-546000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested
:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 15:25:26.068396    6001 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 15:25:26.070303    6001 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 15:25:26.070333    6001 cni.go:84] Creating CNI manager for ""
	I0914 15:25:26.070340    6001 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 15:25:26.070345    6001 start_flags.go:321] config:
	{Name:embed-certs-546000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:embed-certs-546000 Namespace:default
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/m
inikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 15:25:26.074341    6001 iso.go:125] acquiring lock: {Name:mkd5b68b252c18146a825ff5365883948b3c6983 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 15:25:26.082376    6001 out.go:177] * Starting control plane node embed-certs-546000 in cluster embed-certs-546000
	I0914 15:25:26.086255    6001 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0914 15:25:26.086272    6001 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0914 15:25:26.086282    6001 cache.go:57] Caching tarball of preloaded images
	I0914 15:25:26.086344    6001 preload.go:174] Found /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 15:25:26.086349    6001 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0914 15:25:26.086414    6001 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/embed-certs-546000/config.json ...
	I0914 15:25:26.086816    6001 start.go:365] acquiring machines lock for embed-certs-546000: {Name:mk831f2bd5a9a0c932c8a59972319d68edb53d07 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 15:25:27.890318    6001 start.go:369] acquired machines lock for "embed-certs-546000" in 1.805619458s
	I0914 15:25:27.890488    6001 start.go:96] Skipping create...Using existing machine configuration
	I0914 15:25:27.890528    6001 fix.go:54] fixHost starting: 
	I0914 15:25:27.891256    6001 fix.go:102] recreateIfNeeded on embed-certs-546000: state=Stopped err=<nil>
	W0914 15:25:27.891295    6001 fix.go:128] unexpected machine state, will restart: <nil>
	I0914 15:25:27.903013    6001 out.go:177] * Restarting existing qemu2 VM for "embed-certs-546000" ...
	I0914 15:25:27.911272    6001 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/embed-certs-546000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/embed-certs-546000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/embed-certs-546000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:eb:07:b7:42:69 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/embed-certs-546000/disk.qcow2
	I0914 15:25:27.919947    6001 main.go:141] libmachine: STDOUT: 
	I0914 15:25:27.920000    6001 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 15:25:27.920151    6001 fix.go:56] fixHost completed within 29.654083ms
	I0914 15:25:27.920172    6001 start.go:83] releasing machines lock for "embed-certs-546000", held for 29.849458ms
	W0914 15:25:27.920213    6001 start.go:688] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 15:25:27.920356    6001 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:25:27.920372    6001 start.go:703] Will try again in 5 seconds ...
	I0914 15:25:32.917790    6001 start.go:365] acquiring machines lock for embed-certs-546000: {Name:mk831f2bd5a9a0c932c8a59972319d68edb53d07 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 15:25:32.918277    6001 start.go:369] acquired machines lock for "embed-certs-546000" in 379.75µs
	I0914 15:25:32.918446    6001 start.go:96] Skipping create...Using existing machine configuration
	I0914 15:25:32.918467    6001 fix.go:54] fixHost starting: 
	I0914 15:25:32.919308    6001 fix.go:102] recreateIfNeeded on embed-certs-546000: state=Stopped err=<nil>
	W0914 15:25:32.919334    6001 fix.go:128] unexpected machine state, will restart: <nil>
	I0914 15:25:32.924694    6001 out.go:177] * Restarting existing qemu2 VM for "embed-certs-546000" ...
	I0914 15:25:32.930207    6001 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/embed-certs-546000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/embed-certs-546000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/embed-certs-546000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:eb:07:b7:42:69 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/embed-certs-546000/disk.qcow2
	I0914 15:25:32.939107    6001 main.go:141] libmachine: STDOUT: 
	I0914 15:25:32.939164    6001 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 15:25:32.939248    6001 fix.go:56] fixHost completed within 20.802666ms
	I0914 15:25:32.939267    6001 start.go:83] releasing machines lock for "embed-certs-546000", held for 20.98525ms
	W0914 15:25:32.939558    6001 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-546000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-546000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:25:32.946805    6001 out.go:177] 
	W0914 15:25:32.950932    6001 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 15:25:32.950980    6001 out.go:239] * 
	* 
	W0914 15:25:32.953914    6001 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 15:25:32.961687    6001 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-546000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-546000 -n embed-certs-546000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-546000 -n embed-certs-546000: exit status 7 (66.108833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-546000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (7.07s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-850000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-850000 create -f testdata/busybox.yaml: exit status 1 (30.674708ms)

                                                
                                                
** stderr ** 
	error: no openapi getter

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-850000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-850000 -n default-k8s-diff-port-850000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-850000 -n default-k8s-diff-port-850000: exit status 7 (29.258417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-850000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-850000 -n default-k8s-diff-port-850000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-850000 -n default-k8s-diff-port-850000: exit status 7 (28.28875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-850000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-850000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-850000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-850000 describe deploy/metrics-server -n kube-system: exit status 1 (25.983125ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-850000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-850000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-850000 -n default-k8s-diff-port-850000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-850000 -n default-k8s-diff-port-850000: exit status 7 (34.899792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-850000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-850000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-850000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.1: exit status 80 (5.175825917s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-850000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17243
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node default-k8s-diff-port-850000 in cluster default-k8s-diff-port-850000
	* Restarting existing qemu2 VM for "default-k8s-diff-port-850000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-850000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 15:25:28.395707    6026 out.go:296] Setting OutFile to fd 1 ...
	I0914 15:25:28.395810    6026 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:25:28.395813    6026 out.go:309] Setting ErrFile to fd 2...
	I0914 15:25:28.395815    6026 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:25:28.395961    6026 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17243-1006/.minikube/bin
	I0914 15:25:28.396940    6026 out.go:303] Setting JSON to false
	I0914 15:25:28.412154    6026 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3302,"bootTime":1694727026,"procs":410,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 15:25:28.412244    6026 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0914 15:25:28.416922    6026 out.go:177] * [default-k8s-diff-port-850000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0914 15:25:28.423897    6026 out.go:177]   - MINIKUBE_LOCATION=17243
	I0914 15:25:28.427912    6026 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	I0914 15:25:28.423942    6026 notify.go:220] Checking for updates...
	I0914 15:25:28.435856    6026 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 15:25:28.438911    6026 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 15:25:28.441889    6026 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	I0914 15:25:28.444880    6026 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 15:25:28.448202    6026 config.go:182] Loaded profile config "default-k8s-diff-port-850000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 15:25:28.448452    6026 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 15:25:28.452881    6026 out.go:177] * Using the qemu2 driver based on existing profile
	I0914 15:25:28.459870    6026 start.go:298] selected driver: qemu2
	I0914 15:25:28.459875    6026 start.go:902] validating driver "qemu2" against &{Name:default-k8s-diff-port-850000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-850000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subne
t: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 15:25:28.459942    6026 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 15:25:28.461917    6026 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 15:25:28.461946    6026 cni.go:84] Creating CNI manager for ""
	I0914 15:25:28.461953    6026 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 15:25:28.461959    6026 start_flags.go:321] config:
	{Name:default-k8s-diff-port-850000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-8500
00 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 15:25:28.466126    6026 iso.go:125] acquiring lock: {Name:mkd5b68b252c18146a825ff5365883948b3c6983 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 15:25:28.473784    6026 out.go:177] * Starting control plane node default-k8s-diff-port-850000 in cluster default-k8s-diff-port-850000
	I0914 15:25:28.481735    6026 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0914 15:25:28.481757    6026 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0914 15:25:28.481768    6026 cache.go:57] Caching tarball of preloaded images
	I0914 15:25:28.481839    6026 preload.go:174] Found /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 15:25:28.481844    6026 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0914 15:25:28.481926    6026 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/default-k8s-diff-port-850000/config.json ...
	I0914 15:25:28.482306    6026 start.go:365] acquiring machines lock for default-k8s-diff-port-850000: {Name:mk831f2bd5a9a0c932c8a59972319d68edb53d07 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 15:25:28.482335    6026 start.go:369] acquired machines lock for "default-k8s-diff-port-850000" in 20.958µs
	I0914 15:25:28.482344    6026 start.go:96] Skipping create...Using existing machine configuration
	I0914 15:25:28.482349    6026 fix.go:54] fixHost starting: 
	I0914 15:25:28.482471    6026 fix.go:102] recreateIfNeeded on default-k8s-diff-port-850000: state=Stopped err=<nil>
	W0914 15:25:28.482479    6026 fix.go:128] unexpected machine state, will restart: <nil>
	I0914 15:25:28.486877    6026 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-850000" ...
	I0914 15:25:28.493942    6026 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/default-k8s-diff-port-850000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/default-k8s-diff-port-850000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/default-k8s-diff-port-850000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:21:49:c8:33:5a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/default-k8s-diff-port-850000/disk.qcow2
	I0914 15:25:28.495936    6026 main.go:141] libmachine: STDOUT: 
	I0914 15:25:28.495954    6026 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 15:25:28.495987    6026 fix.go:56] fixHost completed within 13.651375ms
	I0914 15:25:28.495992    6026 start.go:83] releasing machines lock for "default-k8s-diff-port-850000", held for 13.668041ms
	W0914 15:25:28.495999    6026 start.go:688] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 15:25:28.496029    6026 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:25:28.496033    6026 start.go:703] Will try again in 5 seconds ...
	I0914 15:25:33.493398    6026 start.go:365] acquiring machines lock for default-k8s-diff-port-850000: {Name:mk831f2bd5a9a0c932c8a59972319d68edb53d07 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 15:25:33.493461    6026 start.go:369] acquired machines lock for "default-k8s-diff-port-850000" in 49.459µs
	I0914 15:25:33.493473    6026 start.go:96] Skipping create...Using existing machine configuration
	I0914 15:25:33.493476    6026 fix.go:54] fixHost starting: 
	I0914 15:25:33.493606    6026 fix.go:102] recreateIfNeeded on default-k8s-diff-port-850000: state=Stopped err=<nil>
	W0914 15:25:33.493611    6026 fix.go:128] unexpected machine state, will restart: <nil>
	I0914 15:25:33.499887    6026 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-850000" ...
	I0914 15:25:33.507994    6026 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/default-k8s-diff-port-850000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/default-k8s-diff-port-850000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/default-k8s-diff-port-850000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:21:49:c8:33:5a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/default-k8s-diff-port-850000/disk.qcow2
	I0914 15:25:33.509880    6026 main.go:141] libmachine: STDOUT: 
	I0914 15:25:33.509895    6026 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 15:25:33.509919    6026 fix.go:56] fixHost completed within 16.454709ms
	I0914 15:25:33.509924    6026 start.go:83] releasing machines lock for "default-k8s-diff-port-850000", held for 16.471959ms
	W0914 15:25:33.509978    6026 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-850000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-850000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:25:33.516870    6026 out.go:177] 
	W0914 15:25:33.519888    6026 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 15:25:33.519896    6026 out.go:239] * 
	* 
	W0914 15:25:33.520359    6026 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 15:25:33.534852    6026 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-850000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-850000 -n default-k8s-diff-port-850000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-850000 -n default-k8s-diff-port-850000: exit status 7 (31.613709ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-850000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.21s)
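Every start attempt in this group fails the same way: the qemu2 driver launches the guest through /opt/socket_vmnet/bin/socket_vmnet_client, which expects the socket_vmnet daemon to be accepting connections on /var/run/socket_vmnet, and the "Connection refused" error means nothing is listening on that unix socket. Below is a minimal, hypothetical Go sketch (not part of the test suite) that probes the same socket, so the condition can be checked on the host before another start is attempted.

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Socket path taken from the qemu2 driver invocation in the log above.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// A "connection refused" here reproduces the driver start failure.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}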

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-546000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-546000 -n embed-certs-546000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-546000 -n embed-certs-546000: exit status 7 (31.614416ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-546000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)
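The post-mortem helper accepts exit status 7 from the status command ("may be ok") because that code reads naturally as a set of bit flags rather than a single error value: with the host, cluster and apiserver all stopped, three flags sum to 7. The constants in this small runnable Go sketch are an assumption about how such a code can be decomposed; they are not taken from this report.

package main

import "fmt"

// Assumed flag layout for a status exit code such as the 7 seen above:
// bit 0 = host not running, bit 1 = cluster not running, bit 2 = apiserver not running.
const (
	hostNotRunning      = 1 << 0
	clusterNotRunning   = 1 << 1
	apiserverNotRunning = 1 << 2
)

func main() {
	code := 7 // exit status reported by the post-mortem status checks
	fmt.Println("host stopped:     ", code&hostNotRunning != 0)
	fmt.Println("cluster stopped:  ", code&clusterNotRunning != 0)
	fmt.Println("apiserver stopped:", code&apiserverNotRunning != 0)
}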

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-546000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-546000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-546000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.042292ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-546000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-546000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-546000 -n embed-certs-546000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-546000 -n embed-certs-546000: exit status 7 (29.52025ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-546000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p embed-certs-546000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p embed-certs-546000 "sudo crictl images -o json": exit status 89 (38.504083ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p embed-certs-546000"

                                                
                                                
-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p embed-certs-546000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p embed-certs-546000"
start_stop_delete_test.go:304: v1.28.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.1",
- 	"registry.k8s.io/kube-controller-manager:v1.28.1",
- 	"registry.k8s.io/kube-proxy:v1.28.1",
- 	"registry.k8s.io/kube-scheduler:v1.28.1",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-546000 -n embed-certs-546000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-546000 -n embed-certs-546000: exit status 7 (27.588834ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-546000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)
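The VerifyKubernetesImages failure has two layers: the ssh command itself exits 89 because the node is stopped, and the plain-text hint it prints is then handed to a JSON decoder, which fails at the leading '*'. The following hedged Go sketch shows that decode step; the struct shape assumes the usual "crictl images -o json" layout (an "images" array with "repoTags"), which is an assumption and not something shown in this report.

package main

import (
	"encoding/json"
	"fmt"
)

// Assumed shape of "crictl images -o json" output.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func repoTags(out []byte) ([]string, error) {
	var parsed crictlImages
	if err := json.Unmarshal(out, &parsed); err != nil {
		// Non-JSON output fails here, e.g. "invalid character '*' looking for beginning of value".
		return nil, err
	}
	var tags []string
	for _, img := range parsed.Images {
		tags = append(tags, img.RepoTags...)
	}
	return tags, nil
}

func main() {
	hint := []byte("* The control plane node must be running for this command")
	if _, err := repoTags(hint); err != nil {
		fmt.Println("decode error:", err) // same class of error the test reports
	}
}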

                                                
                                    

TestStartStop/group/embed-certs/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-546000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-546000 --alsologtostderr -v=1: exit status 89 (40.933583ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p embed-certs-546000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 15:25:33.223129    6045 out.go:296] Setting OutFile to fd 1 ...
	I0914 15:25:33.223518    6045 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:25:33.223522    6045 out.go:309] Setting ErrFile to fd 2...
	I0914 15:25:33.223525    6045 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:25:33.223848    6045 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17243-1006/.minikube/bin
	I0914 15:25:33.224262    6045 out.go:303] Setting JSON to false
	I0914 15:25:33.224279    6045 mustload.go:65] Loading cluster: embed-certs-546000
	I0914 15:25:33.224493    6045 config.go:182] Loaded profile config "embed-certs-546000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 15:25:33.229169    6045 out.go:177] * The control plane node must be running for this command
	I0914 15:25:33.233240    6045 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-546000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-546000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-546000 -n embed-certs-546000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-546000 -n embed-certs-546000: exit status 7 (28.250958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-546000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-546000 -n embed-certs-546000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-546000 -n embed-certs-546000: exit status 7 (28.176167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-546000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)
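Both pause attempts fail before ever touching the VM: the command loads the profile's saved config (the config.json written during start, see the "Saving config to ..." lines earlier in this report), sees that the host is not running, and prints the "control plane node must be running" hint. The sketch below reads such a profile config; the field names (Driver, KubernetesConfig.KubernetesVersion) mirror the values echoed in the log, but the exact file schema and path are assumptions for illustration only.

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// Assumed subset of the profile config stored at
// <MINIKUBE_HOME>/profiles/<name>/config.json.
type profileConfig struct {
	Name             string
	Driver           string
	KubernetesConfig struct {
		KubernetesVersion string
		ContainerRuntime  string
	}
}

func main() {
	// Hypothetical path; the log above uses a Jenkins-specific MINIKUBE_HOME.
	path := os.ExpandEnv("$HOME/.minikube/profiles/embed-certs-546000/config.json")
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, "read config:", err)
		os.Exit(1)
	}
	var cfg profileConfig
	if err := json.Unmarshal(data, &cfg); err != nil {
		fmt.Fprintln(os.Stderr, "decode config:", err)
		os.Exit(1)
	}
	fmt.Printf("Loaded profile config %q: Driver=%s, KubernetesVersion=%s\n",
		cfg.Name, cfg.Driver, cfg.KubernetesConfig.KubernetesVersion)
}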

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-850000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-850000 -n default-k8s-diff-port-850000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-850000 -n default-k8s-diff-port-850000: exit status 7 (30.558292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-850000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-850000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-850000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-850000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.973959ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-850000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-850000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-850000 -n default-k8s-diff-port-850000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-850000 -n default-k8s-diff-port-850000: exit status 7 (30.119667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-850000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-850000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-850000 "sudo crictl images -o json": exit status 89 (43.96925ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-850000"

                                                
                                                
-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-850000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p default-k8s-diff-port-850000"
start_stop_delete_test.go:304: v1.28.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.1",
- 	"registry.k8s.io/kube-controller-manager:v1.28.1",
- 	"registry.k8s.io/kube-proxy:v1.28.1",
- 	"registry.k8s.io/kube-scheduler:v1.28.1",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-850000 -n default-k8s-diff-port-850000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-850000 -n default-k8s-diff-port-850000: exit status 7 (34.222083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-850000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.08s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (9.87s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-495000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-495000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.1: exit status 80 (9.79850175s)

                                                
                                                
-- stdout --
	* [newest-cni-495000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17243
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node newest-cni-495000 in cluster newest-cni-495000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-495000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 15:25:33.698894    6077 out.go:296] Setting OutFile to fd 1 ...
	I0914 15:25:33.699120    6077 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:25:33.699124    6077 out.go:309] Setting ErrFile to fd 2...
	I0914 15:25:33.699127    6077 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:25:33.699279    6077 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17243-1006/.minikube/bin
	I0914 15:25:33.702093    6077 out.go:303] Setting JSON to false
	I0914 15:25:33.718430    6077 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3307,"bootTime":1694727026,"procs":410,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 15:25:33.718521    6077 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0914 15:25:33.722687    6077 out.go:177] * [newest-cni-495000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0914 15:25:33.729753    6077 out.go:177]   - MINIKUBE_LOCATION=17243
	I0914 15:25:33.729775    6077 notify.go:220] Checking for updates...
	I0914 15:25:33.739761    6077 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	I0914 15:25:33.746694    6077 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 15:25:33.750705    6077 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 15:25:33.753634    6077 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	I0914 15:25:33.756670    6077 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 15:25:33.760060    6077 config.go:182] Loaded profile config "default-k8s-diff-port-850000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 15:25:33.760139    6077 config.go:182] Loaded profile config "multinode-463000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 15:25:33.760182    6077 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 15:25:33.763639    6077 out.go:177] * Using the qemu2 driver based on user configuration
	I0914 15:25:33.770718    6077 start.go:298] selected driver: qemu2
	I0914 15:25:33.770724    6077 start.go:902] validating driver "qemu2" against <nil>
	I0914 15:25:33.770738    6077 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 15:25:33.772513    6077 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	W0914 15:25:33.772535    6077 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0914 15:25:33.777714    6077 out.go:177] * Automatically selected the socket_vmnet network
	I0914 15:25:33.785733    6077 start_flags.go:941] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0914 15:25:33.785757    6077 cni.go:84] Creating CNI manager for ""
	I0914 15:25:33.785763    6077 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 15:25:33.785767    6077 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0914 15:25:33.785772    6077 start_flags.go:321] config:
	{Name:newest-cni-495000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:newest-cni-495000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:d
ocker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/s
ocket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 15:25:33.789499    6077 iso.go:125] acquiring lock: {Name:mkd5b68b252c18146a825ff5365883948b3c6983 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 15:25:33.796681    6077 out.go:177] * Starting control plane node newest-cni-495000 in cluster newest-cni-495000
	I0914 15:25:33.800659    6077 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0914 15:25:33.800685    6077 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0914 15:25:33.800697    6077 cache.go:57] Caching tarball of preloaded images
	I0914 15:25:33.800761    6077 preload.go:174] Found /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 15:25:33.800769    6077 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0914 15:25:33.800828    6077 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/newest-cni-495000/config.json ...
	I0914 15:25:33.800842    6077 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/newest-cni-495000/config.json: {Name:mkbf8992d2d7e50b207741c3eb511c431dc868fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 15:25:33.801047    6077 start.go:365] acquiring machines lock for newest-cni-495000: {Name:mk831f2bd5a9a0c932c8a59972319d68edb53d07 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 15:25:33.801073    6077 start.go:369] acquired machines lock for "newest-cni-495000" in 20.458µs
	I0914 15:25:33.801085    6077 start.go:93] Provisioning new machine with config: &{Name:newest-cni-495000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.28.1 ClusterName:newest-cni-495000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountS
tring:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 15:25:33.801119    6077 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 15:25:33.808693    6077 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0914 15:25:33.823129    6077 start.go:159] libmachine.API.Create for "newest-cni-495000" (driver="qemu2")
	I0914 15:25:33.823156    6077 client.go:168] LocalClient.Create starting
	I0914 15:25:33.823219    6077 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem
	I0914 15:25:33.823244    6077 main.go:141] libmachine: Decoding PEM data...
	I0914 15:25:33.823257    6077 main.go:141] libmachine: Parsing certificate...
	I0914 15:25:33.823301    6077 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/cert.pem
	I0914 15:25:33.823318    6077 main.go:141] libmachine: Decoding PEM data...
	I0914 15:25:33.823326    6077 main.go:141] libmachine: Parsing certificate...
	I0914 15:25:33.823682    6077 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17243-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1694625400-17243-arm64.iso...
	I0914 15:25:33.996655    6077 main.go:141] libmachine: Creating SSH key...
	I0914 15:25:34.119544    6077 main.go:141] libmachine: Creating Disk image...
	I0914 15:25:34.119556    6077 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 15:25:34.119824    6077 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/newest-cni-495000/disk.qcow2.raw /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/newest-cni-495000/disk.qcow2
	I0914 15:25:34.128674    6077 main.go:141] libmachine: STDOUT: 
	I0914 15:25:34.128694    6077 main.go:141] libmachine: STDERR: 
	I0914 15:25:34.128772    6077 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/newest-cni-495000/disk.qcow2 +20000M
	I0914 15:25:34.136666    6077 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 15:25:34.136682    6077 main.go:141] libmachine: STDERR: 
	I0914 15:25:34.136703    6077 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/newest-cni-495000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/newest-cni-495000/disk.qcow2
	I0914 15:25:34.136718    6077 main.go:141] libmachine: Starting QEMU VM...
	I0914 15:25:34.136759    6077 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/newest-cni-495000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/newest-cni-495000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/newest-cni-495000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:b3:bb:6d:4d:d3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/newest-cni-495000/disk.qcow2
	I0914 15:25:34.138403    6077 main.go:141] libmachine: STDOUT: 
	I0914 15:25:34.138419    6077 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 15:25:34.138437    6077 client.go:171] LocalClient.Create took 315.524709ms
	I0914 15:25:36.139237    6077 start.go:128] duration metric: createHost completed in 2.339792458s
	I0914 15:25:36.139323    6077 start.go:83] releasing machines lock for "newest-cni-495000", held for 2.339946625s
	W0914 15:25:36.139382    6077 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:25:36.149726    6077 out.go:177] * Deleting "newest-cni-495000" in qemu2 ...
	W0914 15:25:36.169872    6077 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:25:36.169903    6077 start.go:703] Will try again in 5 seconds ...
	I0914 15:25:41.169314    6077 start.go:365] acquiring machines lock for newest-cni-495000: {Name:mk831f2bd5a9a0c932c8a59972319d68edb53d07 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 15:25:41.169818    6077 start.go:369] acquired machines lock for "newest-cni-495000" in 419.333µs
	I0914 15:25:41.169988    6077 start.go:93] Provisioning new machine with config: &{Name:newest-cni-495000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.28.1 ClusterName:newest-cni-495000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountS
tring:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 15:25:41.170254    6077 start.go:125] createHost starting for "" (driver="qemu2")
	I0914 15:25:41.179889    6077 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0914 15:25:41.226858    6077 start.go:159] libmachine.API.Create for "newest-cni-495000" (driver="qemu2")
	I0914 15:25:41.226897    6077 client.go:168] LocalClient.Create starting
	I0914 15:25:41.227052    6077 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/ca.pem
	I0914 15:25:41.227115    6077 main.go:141] libmachine: Decoding PEM data...
	I0914 15:25:41.227131    6077 main.go:141] libmachine: Parsing certificate...
	I0914 15:25:41.227219    6077 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17243-1006/.minikube/certs/cert.pem
	I0914 15:25:41.227257    6077 main.go:141] libmachine: Decoding PEM data...
	I0914 15:25:41.227272    6077 main.go:141] libmachine: Parsing certificate...
	I0914 15:25:41.227778    6077 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17243-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1694625400-17243-arm64.iso...
	I0914 15:25:41.353227    6077 main.go:141] libmachine: Creating SSH key...
	I0914 15:25:41.407066    6077 main.go:141] libmachine: Creating Disk image...
	I0914 15:25:41.407072    6077 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0914 15:25:41.407211    6077 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/newest-cni-495000/disk.qcow2.raw /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/newest-cni-495000/disk.qcow2
	I0914 15:25:41.415779    6077 main.go:141] libmachine: STDOUT: 
	I0914 15:25:41.415796    6077 main.go:141] libmachine: STDERR: 
	I0914 15:25:41.415844    6077 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/newest-cni-495000/disk.qcow2 +20000M
	I0914 15:25:41.423003    6077 main.go:141] libmachine: STDOUT: Image resized.
	
	I0914 15:25:41.423018    6077 main.go:141] libmachine: STDERR: 
	I0914 15:25:41.423030    6077 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/newest-cni-495000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/newest-cni-495000/disk.qcow2
	I0914 15:25:41.423036    6077 main.go:141] libmachine: Starting QEMU VM...
	I0914 15:25:41.423084    6077 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/newest-cni-495000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/newest-cni-495000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/newest-cni-495000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:05:c4:1b:86:e7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/newest-cni-495000/disk.qcow2
	I0914 15:25:41.424693    6077 main.go:141] libmachine: STDOUT: 
	I0914 15:25:41.424707    6077 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 15:25:41.424718    6077 client.go:171] LocalClient.Create took 197.911625ms
	I0914 15:25:43.425997    6077 start.go:128] duration metric: createHost completed in 2.256751125s
	I0914 15:25:43.426091    6077 start.go:83] releasing machines lock for "newest-cni-495000", held for 2.257289792s
	W0914 15:25:43.426726    6077 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-495000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-495000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:25:43.434119    6077 out.go:177] 
	W0914 15:25:43.438226    6077 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 15:25:43.438256    6077 out.go:239] * 
	* 
	W0914 15:25:43.440935    6077 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 15:25:43.450070    6077 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-495000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-495000 -n newest-cni-495000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-495000 -n newest-cni-495000: exit status 7 (67.208459ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-495000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.87s)
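The FirstStart log above shows the driver's recovery path: the first create fails, the half-built profile is deleted, the start waits five seconds and tries exactly once more before exiting with GUEST_PROVISION. The following is a hypothetical Go sketch of that try-once-then-retry shape, not minikube's actual code; the stub always fails the same way the report does.

package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost stands in for the driver start seen in the log (hypothetical stub).
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := startHost(); err == nil {
		return
	} else {
		fmt.Println("! StartHost failed, but will try again:", err)
	}
	time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
	if err := startHost(); err != nil {
		fmt.Println("X Exiting due to GUEST_PROVISION: error provisioning guest:", err)
	}
}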

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-850000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-850000 --alsologtostderr -v=1: exit status 89 (49.94325ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-850000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 15:25:33.766843    6084 out.go:296] Setting OutFile to fd 1 ...
	I0914 15:25:33.766965    6084 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:25:33.766969    6084 out.go:309] Setting ErrFile to fd 2...
	I0914 15:25:33.766971    6084 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:25:33.767098    6084 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17243-1006/.minikube/bin
	I0914 15:25:33.770751    6084 out.go:303] Setting JSON to false
	I0914 15:25:33.770765    6084 mustload.go:65] Loading cluster: default-k8s-diff-port-850000
	I0914 15:25:33.770971    6084 config.go:182] Loaded profile config "default-k8s-diff-port-850000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 15:25:33.773724    6084 out.go:177] * The control plane node must be running for this command
	I0914 15:25:33.785649    6084 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-850000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-850000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-850000 -n default-k8s-diff-port-850000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-850000 -n default-k8s-diff-port-850000: exit status 7 (27.747958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-850000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-850000 -n default-k8s-diff-port-850000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-850000 -n default-k8s-diff-port-850000: exit status 7 (32.173416ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-850000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)
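
The pause command above exits with status 89 because the profile's VM never came up: the post-mortem status call reports "Stopped" (exit status 7). A minimal sketch, not part of the test suite, of how a caller could check the host state the same way the post-mortem helper does before attempting pause; the binary path and profile name are taken from the log above:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        profile := "default-k8s-diff-port-850000" // profile name from the log above

        // Query only the host state, mirroring the post-mortem status call.
        out, err := exec.Command("out/minikube-darwin-arm64", "status",
            "--format={{.Host}}", "-p", profile).Output()
        state := strings.TrimSpace(string(out))
        if err != nil || state != "Running" {
            // status exits non-zero while the host is stopped (exit status 7 above),
            // so pause would only fail with "control plane node must be running".
            fmt.Printf("host state %q (err: %v); skipping pause\n", state, err)
            return
        }
        // Safe to attempt pause here.
        _ = exec.Command("out/minikube-darwin-arm64", "pause", "-p", profile).Run()
    }

The same pre-check applies to the other exit-status-89 failures in this group (ssh, unpause, image listing), which all assume a running control plane.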

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (5.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-495000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-495000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.1: exit status 80 (5.181369875s)

                                                
                                                
-- stdout --
	* [newest-cni-495000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17243
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node newest-cni-495000 in cluster newest-cni-495000
	* Restarting existing qemu2 VM for "newest-cni-495000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-495000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 15:25:43.776845    6130 out.go:296] Setting OutFile to fd 1 ...
	I0914 15:25:43.776982    6130 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:25:43.776985    6130 out.go:309] Setting ErrFile to fd 2...
	I0914 15:25:43.776987    6130 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:25:43.777134    6130 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17243-1006/.minikube/bin
	I0914 15:25:43.778129    6130 out.go:303] Setting JSON to false
	I0914 15:25:43.793212    6130 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3317,"bootTime":1694727026,"procs":408,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 15:25:43.793302    6130 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0914 15:25:43.797666    6130 out.go:177] * [newest-cni-495000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0914 15:25:43.804664    6130 out.go:177]   - MINIKUBE_LOCATION=17243
	I0914 15:25:43.804741    6130 notify.go:220] Checking for updates...
	I0914 15:25:43.812605    6130 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	I0914 15:25:43.816632    6130 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 15:25:43.819657    6130 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 15:25:43.822584    6130 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	I0914 15:25:43.825636    6130 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 15:25:43.828935    6130 config.go:182] Loaded profile config "newest-cni-495000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 15:25:43.829202    6130 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 15:25:43.833576    6130 out.go:177] * Using the qemu2 driver based on existing profile
	I0914 15:25:43.840614    6130 start.go:298] selected driver: qemu2
	I0914 15:25:43.840619    6130 start.go:902] validating driver "qemu2" against &{Name:newest-cni-495000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.1 ClusterName:newest-cni-495000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<n
il> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 15:25:43.840686    6130 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 15:25:43.842684    6130 start_flags.go:941] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0914 15:25:43.842708    6130 cni.go:84] Creating CNI manager for ""
	I0914 15:25:43.842715    6130 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 15:25:43.842721    6130 start_flags.go:321] config:
	{Name:newest-cni-495000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:newest-cni-495000 Namespace:default A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeReques
ted:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 15:25:43.846818    6130 iso.go:125] acquiring lock: {Name:mkd5b68b252c18146a825ff5365883948b3c6983 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 15:25:43.849636    6130 out.go:177] * Starting control plane node newest-cni-495000 in cluster newest-cni-495000
	I0914 15:25:43.856597    6130 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0914 15:25:43.856619    6130 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0914 15:25:43.856635    6130 cache.go:57] Caching tarball of preloaded images
	I0914 15:25:43.856708    6130 preload.go:174] Found /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 15:25:43.856714    6130 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0914 15:25:43.856791    6130 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/newest-cni-495000/config.json ...
	I0914 15:25:43.857178    6130 start.go:365] acquiring machines lock for newest-cni-495000: {Name:mk831f2bd5a9a0c932c8a59972319d68edb53d07 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 15:25:43.857207    6130 start.go:369] acquired machines lock for "newest-cni-495000" in 22.583µs
	I0914 15:25:43.857217    6130 start.go:96] Skipping create...Using existing machine configuration
	I0914 15:25:43.857222    6130 fix.go:54] fixHost starting: 
	I0914 15:25:43.857347    6130 fix.go:102] recreateIfNeeded on newest-cni-495000: state=Stopped err=<nil>
	W0914 15:25:43.857357    6130 fix.go:128] unexpected machine state, will restart: <nil>
	I0914 15:25:43.861637    6130 out.go:177] * Restarting existing qemu2 VM for "newest-cni-495000" ...
	I0914 15:25:43.869600    6130 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/newest-cni-495000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/newest-cni-495000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/newest-cni-495000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:05:c4:1b:86:e7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/newest-cni-495000/disk.qcow2
	I0914 15:25:43.871479    6130 main.go:141] libmachine: STDOUT: 
	I0914 15:25:43.871498    6130 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 15:25:43.871529    6130 fix.go:56] fixHost completed within 14.311833ms
	I0914 15:25:43.871535    6130 start.go:83] releasing machines lock for "newest-cni-495000", held for 14.3295ms
	W0914 15:25:43.871543    6130 start.go:688] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 15:25:43.871583    6130 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:25:43.871589    6130 start.go:703] Will try again in 5 seconds ...
	I0914 15:25:48.871967    6130 start.go:365] acquiring machines lock for newest-cni-495000: {Name:mk831f2bd5a9a0c932c8a59972319d68edb53d07 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 15:25:48.872325    6130 start.go:369] acquired machines lock for "newest-cni-495000" in 273.209µs
	I0914 15:25:48.872459    6130 start.go:96] Skipping create...Using existing machine configuration
	I0914 15:25:48.872481    6130 fix.go:54] fixHost starting: 
	I0914 15:25:48.873263    6130 fix.go:102] recreateIfNeeded on newest-cni-495000: state=Stopped err=<nil>
	W0914 15:25:48.873294    6130 fix.go:128] unexpected machine state, will restart: <nil>
	I0914 15:25:48.879743    6130 out.go:177] * Restarting existing qemu2 VM for "newest-cni-495000" ...
	I0914 15:25:48.885052    6130 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/newest-cni-495000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/newest-cni-495000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/newest-cni-495000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:05:c4:1b:86:e7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17243-1006/.minikube/machines/newest-cni-495000/disk.qcow2
	I0914 15:25:48.894108    6130 main.go:141] libmachine: STDOUT: 
	I0914 15:25:48.894171    6130 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0914 15:25:48.894246    6130 fix.go:56] fixHost completed within 21.772834ms
	I0914 15:25:48.894272    6130 start.go:83] releasing machines lock for "newest-cni-495000", held for 21.931916ms
	W0914 15:25:48.894510    6130 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-495000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-495000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0914 15:25:48.901739    6130 out.go:177] 
	W0914 15:25:48.904708    6130 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0914 15:25:48.904742    6130 out.go:239] * 
	* 
	W0914 15:25:48.907602    6130 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 15:25:48.915695    6130 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-495000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-495000 -n newest-cni-495000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-495000 -n newest-cni-495000: exit status 7 (66.728167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-495000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.25s)
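
Both restart attempts above fail identically: the qemu2 driver launches the VM through socket_vmnet_client and the connection to "/var/run/socket_vmnet" is refused, which points at the socket_vmnet daemon not accepting connections on that path. A small diagnostic sketch, using only the paths shown in the profile config above (SocketVMnetPath) and standard library calls; it is an illustration of the check, not minikube code:

    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        const sockPath = "/var/run/socket_vmnet" // SocketVMnetPath from the config above

        // The path must exist and actually be a unix socket, not a stale file.
        info, err := os.Stat(sockPath)
        if err != nil {
            fmt.Println("socket missing:", err)
            return
        }
        if info.Mode()&os.ModeSocket == 0 {
            fmt.Println("path exists but is not a socket:", info.Mode())
            return
        }

        // "Connection refused" with the file present means nothing is listening,
        // i.e. the socket_vmnet daemon is not running or cannot be reached.
        conn, err := net.DialTimeout("unix", sockPath, 2*time.Second)
        if err != nil {
            fmt.Println("daemon not accepting connections:", err)
            return
        }
        conn.Close()
        fmt.Println("socket_vmnet is reachable")
    }

Every qemu2 start failure in this run shows the same refused connection, so the later FirstStart/SecondStart failures in other groups share this root cause.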

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p newest-cni-495000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p newest-cni-495000 "sudo crictl images -o json": exit status 89 (44.289083ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p newest-cni-495000"

                                                
                                                
-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p newest-cni-495000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p newest-cni-495000"
start_stop_delete_test.go:304: v1.28.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.1",
- 	"registry.k8s.io/kube-controller-manager:v1.28.1",
- 	"registry.k8s.io/kube-proxy:v1.28.1",
- 	"registry.k8s.io/kube-scheduler:v1.28.1",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-495000 -n newest-cni-495000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-495000 -n newest-cni-495000: exit status 7 (29.615542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-495000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)
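
The decode error above ("invalid character '*' looking for beginning of value") is a knock-on effect of the stopped host: "ssh ... crictl images -o json" returns the advice text instead of JSON, and the test then tries to unmarshal that text. A hedged sketch of guarding the decode on exit status and JSON validity; it deliberately decodes generically and assumes nothing about crictl's JSON schema:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    func main() {
        profile := "newest-cni-495000" // profile name from the log above
        cmd := exec.Command("out/minikube-darwin-arm64", "ssh", "-p", profile,
            "sudo crictl images -o json")
        out, err := cmd.Output()
        if err != nil {
            // Exit status 89 above means the advice text was printed, not JSON.
            fmt.Println("ssh/crictl failed, not attempting to decode:", err)
            return
        }
        if !json.Valid(out) {
            fmt.Println("crictl output is not valid JSON")
            return
        }
        // Decode generically; concrete field names are left to crictl's output format.
        var images map[string]interface{}
        _ = json.Unmarshal(out, &images)
        fmt.Printf("decoded %d top-level keys\n", len(images))
    }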

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-495000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-495000 --alsologtostderr -v=1: exit status 89 (40.9925ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p newest-cni-495000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 15:25:49.098833    6144 out.go:296] Setting OutFile to fd 1 ...
	I0914 15:25:49.098995    6144 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:25:49.098998    6144 out.go:309] Setting ErrFile to fd 2...
	I0914 15:25:49.099001    6144 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:25:49.099138    6144 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17243-1006/.minikube/bin
	I0914 15:25:49.099373    6144 out.go:303] Setting JSON to false
	I0914 15:25:49.099382    6144 mustload.go:65] Loading cluster: newest-cni-495000
	I0914 15:25:49.099606    6144 config.go:182] Loaded profile config "newest-cni-495000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 15:25:49.103502    6144 out.go:177] * The control plane node must be running for this command
	I0914 15:25:49.107475    6144 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-495000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-495000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-495000 -n newest-cni-495000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-495000 -n newest-cni-495000: exit status 7 (29.398083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-495000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-495000 -n newest-cni-495000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-495000 -n newest-cni-495000: exit status 7 (30.162041ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-495000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)

                                                
                                    

Test pass (142/255)

Order passed test Duration
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.09
10 TestDownloadOnly/v1.28.1/json-events 23.99
11 TestDownloadOnly/v1.28.1/preload-exists 0
14 TestDownloadOnly/v1.28.1/kubectl 0
15 TestDownloadOnly/v1.28.1/LogsDuration 0.08
16 TestDownloadOnly/DeleteAll 0.24
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.23
19 TestBinaryMirror 0.37
22 TestAddons/Setup 403.95
27 TestAddons/parallel/MetricsServer 5.26
31 TestAddons/parallel/Headlamp 11.42
35 TestAddons/serial/GCPAuth/Namespaces 0.07
36 TestAddons/StoppedEnableDisable 12.27
44 TestHyperKitDriverInstallOrUpdate 7.84
47 TestErrorSpam/setup 29.71
48 TestErrorSpam/start 0.34
49 TestErrorSpam/status 0.27
50 TestErrorSpam/pause 0.64
51 TestErrorSpam/unpause 0.57
52 TestErrorSpam/stop 12.24
55 TestFunctional/serial/CopySyncFile 0
56 TestFunctional/serial/StartWithProxy 43.91
57 TestFunctional/serial/AuditLog 0
58 TestFunctional/serial/SoftStart 32.35
59 TestFunctional/serial/KubeContext 0.03
60 TestFunctional/serial/KubectlGetPods 0.04
63 TestFunctional/serial/CacheCmd/cache/add_remote 3.54
64 TestFunctional/serial/CacheCmd/cache/add_local 1.23
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.03
66 TestFunctional/serial/CacheCmd/cache/list 0.03
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.08
68 TestFunctional/serial/CacheCmd/cache/cache_reload 0.94
69 TestFunctional/serial/CacheCmd/cache/delete 0.07
70 TestFunctional/serial/MinikubeKubectlCmd 0.41
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.54
72 TestFunctional/serial/ExtraConfig 36.74
73 TestFunctional/serial/ComponentHealth 0.04
74 TestFunctional/serial/LogsCmd 0.62
75 TestFunctional/serial/LogsFileCmd 0.58
76 TestFunctional/serial/InvalidService 4.27
78 TestFunctional/parallel/ConfigCmd 0.21
79 TestFunctional/parallel/DashboardCmd 13.44
80 TestFunctional/parallel/DryRun 0.23
81 TestFunctional/parallel/InternationalLanguage 0.11
82 TestFunctional/parallel/StatusCmd 0.27
87 TestFunctional/parallel/AddonsCmd 0.12
90 TestFunctional/parallel/SSHCmd 0.15
91 TestFunctional/parallel/CpCmd 0.31
93 TestFunctional/parallel/FileSync 0.08
94 TestFunctional/parallel/CertSync 0.46
98 TestFunctional/parallel/NodeLabels 0.04
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.07
102 TestFunctional/parallel/License 0.53
104 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.23
105 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
107 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.12
108 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.04
109 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
110 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
111 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
112 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
113 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
114 TestFunctional/parallel/ServiceCmd/DeployApp 6.09
115 TestFunctional/parallel/ServiceCmd/List 0.29
116 TestFunctional/parallel/ServiceCmd/JSONOutput 0.29
117 TestFunctional/parallel/ServiceCmd/HTTPS 0.13
118 TestFunctional/parallel/ServiceCmd/Format 0.11
119 TestFunctional/parallel/ServiceCmd/URL 0.12
120 TestFunctional/parallel/ProfileCmd/profile_not_create 0.2
121 TestFunctional/parallel/ProfileCmd/profile_list 0.16
122 TestFunctional/parallel/ProfileCmd/profile_json_output 0.16
123 TestFunctional/parallel/MountCmd/any-port 5.14
124 TestFunctional/parallel/MountCmd/specific-port 1.11
125 TestFunctional/parallel/MountCmd/VerifyCleanup 0.93
126 TestFunctional/parallel/Version/short 0.04
127 TestFunctional/parallel/Version/components 0.18
128 TestFunctional/parallel/ImageCommands/ImageListShort 0.08
129 TestFunctional/parallel/ImageCommands/ImageListTable 0.08
130 TestFunctional/parallel/ImageCommands/ImageListJson 0.08
131 TestFunctional/parallel/ImageCommands/ImageListYaml 0.08
132 TestFunctional/parallel/ImageCommands/ImageBuild 2.07
133 TestFunctional/parallel/ImageCommands/Setup 2.06
134 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.06
135 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.47
136 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 3.84
137 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.53
138 TestFunctional/parallel/ImageCommands/ImageRemove 0.18
139 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.63
140 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.61
141 TestFunctional/parallel/DockerEnv/bash 0.38
142 TestFunctional/parallel/UpdateContextCmd/no_changes 0.05
143 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
144 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
145 TestFunctional/delete_addon-resizer_images 0.12
146 TestFunctional/delete_my-image_image 0.04
147 TestFunctional/delete_minikube_cached_images 0.04
151 TestImageBuild/serial/Setup 27.67
152 TestImageBuild/serial/NormalBuild 1.52
154 TestImageBuild/serial/BuildWithDockerIgnore 0.12
155 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.11
158 TestIngressAddonLegacy/StartLegacyK8sCluster 65.13
160 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 19.39
161 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.24
165 TestJSONOutput/start/Command 46.38
166 TestJSONOutput/start/Audit 0
168 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
169 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
171 TestJSONOutput/pause/Command 0.28
172 TestJSONOutput/pause/Audit 0
174 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
175 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
177 TestJSONOutput/unpause/Command 0.21
178 TestJSONOutput/unpause/Audit 0
180 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
181 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
183 TestJSONOutput/stop/Command 12.07
184 TestJSONOutput/stop/Audit 0
186 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
188 TestErrorJSONOutput 0.33
193 TestMainNoArgs 0.03
194 TestMinikubeProfile 65.79
250 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
254 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
255 TestNoKubernetes/serial/ProfileList 0.14
256 TestNoKubernetes/serial/Stop 0.06
258 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
276 TestStartStop/group/old-k8s-version/serial/Stop 0.1
277 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.1
281 TestStartStop/group/no-preload/serial/Stop 0.06
282 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.09
298 TestStartStop/group/embed-certs/serial/Stop 0.07
299 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.09
303 TestStartStop/group/default-k8s-diff-port/serial/Stop 0.06
304 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.09
316 TestStartStop/group/newest-cni/serial/DeployApp 0
317 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
318 TestStartStop/group/newest-cni/serial/Stop 0.06
319 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.09
321 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
322 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-917000
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-917000: exit status 85 (91.627792ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-917000 | jenkins | v1.31.2 | 14 Sep 23 14:35 PDT |          |
	|         | -p download-only-917000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/14 14:35:32
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.21.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 14:35:32.928760    1435 out.go:296] Setting OutFile to fd 1 ...
	I0914 14:35:32.928895    1435 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 14:35:32.928898    1435 out.go:309] Setting ErrFile to fd 2...
	I0914 14:35:32.928901    1435 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 14:35:32.929035    1435 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17243-1006/.minikube/bin
	W0914 14:35:32.929122    1435 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17243-1006/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17243-1006/.minikube/config/config.json: no such file or directory
	I0914 14:35:32.930265    1435 out.go:303] Setting JSON to true
	I0914 14:35:32.946630    1435 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":306,"bootTime":1694727026,"procs":399,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 14:35:32.946712    1435 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0914 14:35:32.952236    1435 out.go:97] [download-only-917000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0914 14:35:32.956218    1435 out.go:169] MINIKUBE_LOCATION=17243
	W0914 14:35:32.952390    1435 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball: no such file or directory
	I0914 14:35:32.952432    1435 notify.go:220] Checking for updates...
	I0914 14:35:32.963164    1435 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	I0914 14:35:32.966266    1435 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 14:35:32.969131    1435 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 14:35:32.972181    1435 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	W0914 14:35:32.978094    1435 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0914 14:35:32.978282    1435 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 14:35:32.983278    1435 out.go:97] Using the qemu2 driver based on user configuration
	I0914 14:35:32.983298    1435 start.go:298] selected driver: qemu2
	I0914 14:35:32.983301    1435 start.go:902] validating driver "qemu2" against <nil>
	I0914 14:35:32.983367    1435 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0914 14:35:32.986172    1435 out.go:169] Automatically selected the socket_vmnet network
	I0914 14:35:32.991532    1435 start_flags.go:384] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0914 14:35:32.991608    1435 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0914 14:35:32.991663    1435 cni.go:84] Creating CNI manager for ""
	I0914 14:35:32.991681    1435 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0914 14:35:32.991692    1435 start_flags.go:321] config:
	{Name:download-only-917000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-917000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRun
time:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 14:35:32.996903    1435 iso.go:125] acquiring lock: {Name:mkd5b68b252c18146a825ff5365883948b3c6983 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 14:35:33.001169    1435 out.go:97] Downloading VM boot image ...
	I0914 14:35:33.001186    1435 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1694625400-17243-arm64.iso
	I0914 14:35:43.903863    1435 out.go:97] Starting control plane node download-only-917000 in cluster download-only-917000
	I0914 14:35:43.903888    1435 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0914 14:35:44.020559    1435 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0914 14:35:44.020573    1435 cache.go:57] Caching tarball of preloaded images
	I0914 14:35:44.020814    1435 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0914 14:35:44.025893    1435 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0914 14:35:44.025905    1435 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0914 14:35:44.234154    1435 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4?checksum=md5:a000baffb0664b293d602f95ed25caa6 -> /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0914 14:35:57.209980    1435 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0914 14:35:57.210100    1435 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0914 14:35:57.852789    1435 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0914 14:35:57.852982    1435 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/download-only-917000/config.json ...
	I0914 14:35:57.853001    1435 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/download-only-917000/config.json: {Name:mk282f6e537d7ce3cce445646d350fe24efa799f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 14:35:57.853243    1435 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0914 14:35:57.853407    1435 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 -> /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/darwin/arm64/v1.16.0/kubectl
	I0914 14:35:58.230896    1435 out.go:169] 
	W0914 14:35:58.234643    1435 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/17243-1006/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x10861fb20 0x10861fb20 0x10861fb20 0x10861fb20 0x10861fb20 0x10861fb20 0x10861fb20] Decompressors:map[bz2:0x140005336f0 gz:0x140005336f8 tar:0x140005336a0 tar.bz2:0x140005336b0 tar.gz:0x140005336c0 tar.xz:0x140005336d0 tar.zst:0x140005336e0 tbz2:0x140005336b0 tgz:0x140005336c0 txz:0x140005336d0 tzst:0x140005336e0 xz:0x14000533700 zip:0x14000533710 zst:0x14000533708] Getters:map[file:0x14000062700 http:0x1400017e5a0 https:0x1400017e5f0] Dir:false ProgressListener
:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0914 14:35:58.234667    1435 out_reason.go:110] 
	W0914 14:35:58.241744    1435 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 14:35:58.245578    1435 out.go:169] 
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-917000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.09s)
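
The "Failed to cache kubectl" entry in the log above comes from a 404 on the checksum URL for the darwin/arm64 kubectl at v1.16.0, which is consistent with that release predating darwin/arm64 binaries; it also explains the TestDownloadOnly/v1.16.0/json-events and kubectl failures in the failed-test list. A small sketch that checks the same URLs from the host rather than asserting anything about the release layout (URLs copied from the download log above):

    package main

    import (
        "fmt"
        "net/http"
    )

    func main() {
        urls := []string{
            "https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl",
            "https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1",
        }
        for _, u := range urls {
            resp, err := http.Head(u)
            if err != nil {
                fmt.Println(u, "error:", err)
                continue
            }
            resp.Body.Close()
            // A 404 here matches the "bad response code: 404" in the log.
            fmt.Println(u, "->", resp.Status)
        }
    }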

                                                
                                    
TestDownloadOnly/v1.28.1/json-events (23.99s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.1/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-917000 --force --alsologtostderr --kubernetes-version=v1.28.1 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-917000 --force --alsologtostderr --kubernetes-version=v1.28.1 --container-runtime=docker --driver=qemu2 : (23.993362041s)
--- PASS: TestDownloadOnly/v1.28.1/json-events (23.99s)

                                                
                                    
TestDownloadOnly/v1.28.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.1/preload-exists
--- PASS: TestDownloadOnly/v1.28.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.1/kubectl
--- PASS: TestDownloadOnly/v1.28.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.1/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.1/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-917000
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-917000: exit status 85 (76.278167ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-917000 | jenkins | v1.31.2 | 14 Sep 23 14:35 PDT |          |
	|         | -p download-only-917000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-917000 | jenkins | v1.31.2 | 14 Sep 23 14:35 PDT |          |
	|         | -p download-only-917000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.1   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/14 14:35:58
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.21.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 14:35:58.431528    1452 out.go:296] Setting OutFile to fd 1 ...
	I0914 14:35:58.431659    1452 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 14:35:58.431662    1452 out.go:309] Setting ErrFile to fd 2...
	I0914 14:35:58.431664    1452 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 14:35:58.431785    1452 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17243-1006/.minikube/bin
	W0914 14:35:58.431853    1452 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17243-1006/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17243-1006/.minikube/config/config.json: no such file or directory
	I0914 14:35:58.432793    1452 out.go:303] Setting JSON to true
	I0914 14:35:58.447894    1452 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":332,"bootTime":1694727026,"procs":398,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 14:35:58.447964    1452 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0914 14:35:58.452257    1452 out.go:97] [download-only-917000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0914 14:35:58.456011    1452 out.go:169] MINIKUBE_LOCATION=17243
	I0914 14:35:58.452364    1452 notify.go:220] Checking for updates...
	I0914 14:35:58.463218    1452 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	I0914 14:35:58.464695    1452 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 14:35:58.468143    1452 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 14:35:58.471184    1452 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	W0914 14:35:58.477185    1452 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0914 14:35:58.477538    1452 config.go:182] Loaded profile config "download-only-917000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0914 14:35:58.477580    1452 start.go:810] api.Load failed for download-only-917000: filestore "download-only-917000": Docker machine "download-only-917000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0914 14:35:58.477645    1452 driver.go:373] Setting default libvirt URI to qemu:///system
	W0914 14:35:58.477664    1452 start.go:810] api.Load failed for download-only-917000: filestore "download-only-917000": Docker machine "download-only-917000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0914 14:35:58.481138    1452 out.go:97] Using the qemu2 driver based on existing profile
	I0914 14:35:58.481146    1452 start.go:298] selected driver: qemu2
	I0914 14:35:58.481148    1452 start.go:902] validating driver "qemu2" against &{Name:download-only-917000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-917000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Binar
yMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 14:35:58.483080    1452 cni.go:84] Creating CNI manager for ""
	I0914 14:35:58.483099    1452 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 14:35:58.483106    1452 start_flags.go:321] config:
	{Name:download-only-917000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:download-only-917000 Namespace:def
ault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath
: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 14:35:58.487066    1452 iso.go:125] acquiring lock: {Name:mkd5b68b252c18146a825ff5365883948b3c6983 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 14:35:58.490235    1452 out.go:97] Starting control plane node download-only-917000 in cluster download-only-917000
	I0914 14:35:58.490243    1452 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0914 14:35:58.708184    1452 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.1/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0914 14:35:58.708243    1452 cache.go:57] Caching tarball of preloaded images
	I0914 14:35:58.708939    1452 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0914 14:35:58.713961    1452 out.go:97] Downloading Kubernetes v1.28.1 preload ...
	I0914 14:35:58.713986    1452 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 ...
	I0914 14:35:58.938929    1452 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.1/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4?checksum=md5:014fa2c9750ed18a91c50dffb6ed7aeb -> /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0914 14:36:15.215754    1452 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 ...
	I0914 14:36:15.215904    1452 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 ...
	I0914 14:36:15.795718    1452 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0914 14:36:15.795792    1452 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/download-only-917000/config.json ...
	I0914 14:36:15.796030    1452 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0914 14:36:15.796189    1452 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.1/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.1/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/17243-1006/.minikube/cache/darwin/arm64/v1.28.1/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-917000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.1/LogsDuration (0.08s)
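
The preload step logged above fetches the tarball with a ?checksum=md5:... query parameter and then verifies the file on disk before caching it. Below is a minimal, self-contained Go sketch of that verification step only; it is not minikube's actual preload.go code, the tarball path is a placeholder, and the digest is simply the one carried in the download URL above.

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// verifyMD5 recomputes the md5 digest of the file at path and compares it to want.
func verifyMD5(path, want string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	got := hex.EncodeToString(h.Sum(nil))
	if got != want {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
	}
	return nil
}

func main() {
	// Placeholder path; the expected digest is the ?checksum=md5:... value from the log URL.
	tarball := "preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4"
	if err := verifyMD5(tarball, "014fa2c9750ed18a91c50dffb6ed7aeb"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("preload checksum OK")
}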

                                                
                                    
TestDownloadOnly/DeleteAll (0.24s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.24s)

                                                
                                    
TestDownloadOnly/DeleteAlwaysSucceeds (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-917000
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.23s)

                                                
                                    
TestBinaryMirror (0.37s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-231000 --alsologtostderr --binary-mirror http://127.0.0.1:49379 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-231000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-231000
--- PASS: TestBinaryMirror (0.37s)

                                                
                                    
TestAddons/Setup (403.95s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-388000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:88: (dbg) Done: out/minikube-darwin-arm64 start -p addons-388000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=qemu2  --addons=ingress --addons=ingress-dns: (6m43.949472875s)
--- PASS: TestAddons/Setup (403.95s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.26s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:383: metrics-server stabilized in 1.868875ms
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-dtj78" [7b539063-f45b-4a15-97e7-6713ea57e519] Running
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.007085084s
addons_test.go:391: (dbg) Run:  kubectl --context addons-388000 top pods -n kube-system
addons_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p addons-388000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.26s)
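
The helper invoked above waits up to 6m0s for a pod matching the label selector k8s-app=metrics-server in kube-system to reach a healthy state. The following is a rough client-go sketch of that style of label-selector wait, offered only as an illustration rather than the helpers_test.go implementation; the kubeconfig path is the one shown in the log, and the 2-second poll interval is an arbitrary choice.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForRunningPods polls until at least one pod matching selector in ns is Running,
// or the timeout expires.
func waitForRunningPods(ctx context.Context, cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("no Running pod matching %q in %q after %s", selector, ns, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/17243-1006/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForRunningPods(context.Background(), cs, "kube-system", "k8s-app=metrics-server", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("metrics-server is Running")
}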

                                                
                                    
TestAddons/parallel/Headlamp (11.42s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:800: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-388000 --alsologtostderr -v=1
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-699c48fb74-9lhdj" [cf1b687b-3e12-4ae8-9585-ad3fd29b43ad] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-699c48fb74-9lhdj" [cf1b687b-3e12-4ae8-9585-ad3fd29b43ad] Running
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.00807575s
--- PASS: TestAddons/parallel/Headlamp (11.42s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.07s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:626: (dbg) Run:  kubectl --context addons-388000 create ns new-namespace
addons_test.go:640: (dbg) Run:  kubectl --context addons-388000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.07s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.27s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:148: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-388000
addons_test.go:148: (dbg) Done: out/minikube-darwin-arm64 stop -p addons-388000: (12.082137s)
addons_test.go:152: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-388000
addons_test.go:156: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-388000
addons_test.go:161: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-388000
--- PASS: TestAddons/StoppedEnableDisable (12.27s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (7.84s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (7.84s)

                                                
                                    
TestErrorSpam/setup (29.71s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-642000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-642000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-642000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-642000 --driver=qemu2 : (29.709025667s)
--- PASS: TestErrorSpam/setup (29.71s)

                                                
                                    
TestErrorSpam/start (0.34s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-642000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-642000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-642000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-642000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-642000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-642000 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

                                                
                                    
TestErrorSpam/status (0.27s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-642000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-642000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-642000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-642000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-642000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-642000 status
--- PASS: TestErrorSpam/status (0.27s)

                                                
                                    
TestErrorSpam/pause (0.64s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-642000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-642000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-642000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-642000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-642000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-642000 pause
--- PASS: TestErrorSpam/pause (0.64s)

                                                
                                    
TestErrorSpam/unpause (0.57s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-642000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-642000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-642000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-642000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-642000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-642000 unpause
--- PASS: TestErrorSpam/unpause (0.57s)

                                                
                                    
TestErrorSpam/stop (12.24s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-642000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-642000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-642000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-642000 stop: (12.077880083s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-642000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-642000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-642000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-642000 stop
--- PASS: TestErrorSpam/stop (12.24s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/17243-1006/.minikube/files/etc/test/nested/copy/1425/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (43.91s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-398000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2230: (dbg) Done: out/minikube-darwin-arm64 start -p functional-398000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (43.907602583s)
--- PASS: TestFunctional/serial/StartWithProxy (43.91s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (32.35s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-398000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-darwin-arm64 start -p functional-398000 --alsologtostderr -v=8: (32.35220775s)
functional_test.go:659: soft start took 32.352621208s for "functional-398000" cluster.
--- PASS: TestFunctional/serial/SoftStart (32.35s)

                                                
                                    
TestFunctional/serial/KubeContext (0.03s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-398000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.54s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-398000 cache add registry.k8s.io/pause:3.1: (1.353084958s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-398000 cache add registry.k8s.io/pause:3.3: (1.148151625s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-398000 cache add registry.k8s.io/pause:latest: (1.041001542s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.54s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.23s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-398000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialCacheCmdcacheadd_local1388994352/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 cache add minikube-local-cache-test:functional-398000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 cache delete minikube-local-cache-test:functional-398000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-398000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.23s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.03s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.03s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.03s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (0.94s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-398000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (77.333291ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (0.94s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.41s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 kubectl -- --context functional-398000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.41s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.54s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-398000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.54s)

                                                
                                    
TestFunctional/serial/ExtraConfig (36.74s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-398000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-darwin-arm64 start -p functional-398000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (36.742146708s)
functional_test.go:757: restart took 36.742274375s for "functional-398000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (36.74s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-398000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.04s)

                                                
                                    
TestFunctional/serial/LogsCmd (0.62s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.62s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (0.58s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 logs --file /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialLogsFileCmd369916531/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.58s)

                                                
                                    
TestFunctional/serial/InvalidService (4.27s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-398000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-398000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-398000: exit status 115 (115.163958ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:30173 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-398000 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-398000 delete -f testdata/invalidsvc.yaml: (1.033460541s)
--- PASS: TestFunctional/serial/InvalidService (4.27s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-398000 config get cpus: exit status 14 (28.555833ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-398000 config get cpus: exit status 14 (29.607375ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.21s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (13.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-398000 --alsologtostderr -v=1]
2023/09/14 15:07:43 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-398000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 3025: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.44s)

                                                
                                    
TestFunctional/parallel/DryRun (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-398000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-398000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (108.592334ms)

                                                
                                                
-- stdout --
	* [functional-398000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17243
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 15:07:30.075696    3012 out.go:296] Setting OutFile to fd 1 ...
	I0914 15:07:30.075832    3012 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:07:30.075836    3012 out.go:309] Setting ErrFile to fd 2...
	I0914 15:07:30.075839    3012 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:07:30.075994    3012 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17243-1006/.minikube/bin
	I0914 15:07:30.077183    3012 out.go:303] Setting JSON to false
	I0914 15:07:30.093220    3012 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2224,"bootTime":1694727026,"procs":412,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 15:07:30.093319    3012 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0914 15:07:30.097326    3012 out.go:177] * [functional-398000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0914 15:07:30.104196    3012 out.go:177]   - MINIKUBE_LOCATION=17243
	I0914 15:07:30.108309    3012 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	I0914 15:07:30.104263    3012 notify.go:220] Checking for updates...
	I0914 15:07:30.111318    3012 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 15:07:30.112549    3012 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 15:07:30.115273    3012 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	I0914 15:07:30.118302    3012 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 15:07:30.121585    3012 config.go:182] Loaded profile config "functional-398000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 15:07:30.121850    3012 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 15:07:30.126290    3012 out.go:177] * Using the qemu2 driver based on existing profile
	I0914 15:07:30.133340    3012 start.go:298] selected driver: qemu2
	I0914 15:07:30.133344    3012 start.go:902] validating driver "qemu2" against &{Name:functional-398000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.1 ClusterName:functional-398000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false E
xtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 15:07:30.133385    3012 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 15:07:30.139283    3012 out.go:177] 
	W0914 15:07:30.143342    3012 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0914 15:07:30.147271    3012 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-398000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.23s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-398000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-398000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (111.429417ms)

                                                
                                                
-- stdout --
	* [functional-398000] minikube v1.31.2 sur Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17243
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 15:07:29.958769    3008 out.go:296] Setting OutFile to fd 1 ...
	I0914 15:07:29.958887    3008 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:07:29.958890    3008 out.go:309] Setting ErrFile to fd 2...
	I0914 15:07:29.958892    3008 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 15:07:29.959017    3008 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17243-1006/.minikube/bin
	I0914 15:07:29.960354    3008 out.go:303] Setting JSON to false
	I0914 15:07:29.976428    3008 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2223,"bootTime":1694727026,"procs":412,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0914 15:07:29.976514    3008 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0914 15:07:29.982387    3008 out.go:177] * [functional-398000] minikube v1.31.2 sur Darwin 13.5.2 (arm64)
	I0914 15:07:29.989324    3008 out.go:177]   - MINIKUBE_LOCATION=17243
	I0914 15:07:29.993368    3008 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	I0914 15:07:29.989412    3008 notify.go:220] Checking for updates...
	I0914 15:07:30.001299    3008 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0914 15:07:30.004338    3008 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 15:07:30.007287    3008 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	I0914 15:07:30.010302    3008 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 15:07:30.013642    3008 config.go:182] Loaded profile config "functional-398000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 15:07:30.013884    3008 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 15:07:30.018282    3008 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0914 15:07:30.025334    3008 start.go:298] selected driver: qemu2
	I0914 15:07:30.025338    3008 start.go:902] validating driver "qemu2" against &{Name:functional-398000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.1 ClusterName:functional-398000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false E
xtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 15:07:30.025380    3008 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 15:07:30.031282    3008 out.go:177] 
	W0914 15:07:30.035358    3008 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0914 15:07:30.038218    3008 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.27s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.15s)

                                                
                                    
TestFunctional/parallel/CpCmd (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 ssh -n functional-398000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 cp functional-398000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd341327751/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 ssh -n functional-398000 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.31s)

                                                
                                    
TestFunctional/parallel/FileSync (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1425/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 ssh "sudo cat /etc/test/nested/copy/1425/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.08s)

                                                
                                    
TestFunctional/parallel/CertSync (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1425.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 ssh "sudo cat /etc/ssl/certs/1425.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1425.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 ssh "sudo cat /usr/share/ca-certificates/1425.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/14252.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 ssh "sudo cat /etc/ssl/certs/14252.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/14252.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 ssh "sudo cat /usr/share/ca-certificates/14252.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.46s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-398000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.04s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.07s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-398000 ssh "sudo systemctl is-active crio": exit status 1 (73.973417ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.07s)
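
Editor's note: the non-zero exit recorded above is expected, not a failure. With the docker runtime active, `systemctl is-active crio` prints "inactive" and exits with status 3, and the test passes on the printed state rather than the exit code. A minimal standalone version of that check follows, using the binary path and profile name from this run (both would differ in other environments).

// Hedged sketch of the "non-active runtime" check seen in the log above.
// The minikube binary path and profile name are copied from this report.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "-p", "functional-398000",
		"ssh", "sudo systemctl is-active crio")
	out, err := cmd.Output() // err is expected here: "inactive" exits non-zero
	state := strings.TrimSpace(string(out))
	fmt.Printf("crio: %s (ssh exit err: %v)\n", state, err)
	if state == "active" {
		fmt.Println("unexpected: crio is running alongside the docker runtime")
	}
}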

TestFunctional/parallel/License (0.53s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.53s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.23s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-398000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-398000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-398000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-398000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2849: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.23s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-398000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-398000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [3b4d8f18-d4cc-45f6-9574-a88f0cdb0809] Pending
helpers_test.go:344: "nginx-svc" [3b4d8f18-d4cc-45f6-9574-a88f0cdb0809] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [3b4d8f18-d4cc-45f6-9574-a88f0cdb0809] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.012349333s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.12s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-398000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.103.117.53 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)
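
Editor's note: the dig call above queries the cluster DNS service address (10.96.0.10 in this run) directly from the macOS host, which only resolves while `minikube tunnel` is routing the service network. A rough Go equivalent of the same lookup is sketched below, with the resolver address and service name taken from this log.

// Hedged sketch: asks kube-dns (10.96.0.10 in this run) for the tunnelled
// service name, mirroring the dig check in the log above.
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 5 * time.Second}
			// Ignore the system resolver and talk to the cluster DNS server instead.
			return d.DialContext(ctx, network, "10.96.0.10:53")
		},
	}
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	addrs, err := r.LookupHost(ctx, "nginx-svc.default.svc.cluster.local.")
	fmt.Println(addrs, err)
}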

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-398000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ServiceCmd/DeployApp (6.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1436: (dbg) Run:  kubectl --context functional-398000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-398000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-qb8dx" [b4b51dff-689e-491b-8fc5-59f7380892a7] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-qb8dx" [b4b51dff-689e-491b-8fc5-59f7380892a7] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.008610875s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.09s)

TestFunctional/parallel/ServiceCmd/List (0.29s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.29s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.29s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 service list -o json
functional_test.go:1493: Took "291.241542ms" to run "out/minikube-darwin-arm64 -p functional-398000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.29s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.13s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.105.4:31076
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.13s)

TestFunctional/parallel/ServiceCmd/Format (0.11s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.11s)

TestFunctional/parallel/ServiceCmd/URL (0.12s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.105.4:31076
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.12s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.2s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.20s)

TestFunctional/parallel/ProfileCmd/profile_list (0.16s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1314: Took "121.948834ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1328: Took "32.98875ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.16s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.16s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1365: Took "123.395125ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1378: Took "33.749416ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.16s)

TestFunctional/parallel/MountCmd/any-port (5.14s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-398000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2465197020/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1694729242483660000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2465197020/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1694729242483660000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2465197020/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1694729242483660000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2465197020/001/test-1694729242483660000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-398000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (74.818542ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 14 22:07 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 14 22:07 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 14 22:07 test-1694729242483660000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 ssh cat /mount-9p/test-1694729242483660000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-398000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [2588d4a1-8737-4387-91c7-c3a110d8021b] Pending
helpers_test.go:344: "busybox-mount" [2588d4a1-8737-4387-91c7-c3a110d8021b] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [2588d4a1-8737-4387-91c7-c3a110d8021b] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [2588d4a1-8737-4387-91c7-c3a110d8021b] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.012333208s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-398000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-398000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2465197020/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.14s)
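
Editor's note: the any-port flow above reduces to writing files into the host directory backing the 9p mount, confirming they are visible at /mount-9p inside the guest, and then letting the busybox-mount pod create and remove files in the other direction. A compressed host-side sketch of the first half follows; it assumes the `minikube mount` daemon from the log is still running and that the caller passes in the host directory, which is a local temp path in this run.

// Hedged sketch of the host-to-guest half of the 9p mount check.
// Binary path and profile name come from this report; the host directory
// backing the mount is supplied as an argument.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	hostDir := os.Args[1] // directory given to: minikube mount <hostDir>:/mount-9p
	if err := os.WriteFile(filepath.Join(hostDir, "created-by-test"), []byte("test"), 0o644); err != nil {
		panic(err)
	}
	// The same file should now be readable at the guest-side mount point.
	out, err := exec.Command("out/minikube-darwin-arm64", "-p", "functional-398000",
		"ssh", "cat /mount-9p/created-by-test").CombinedOutput()
	fmt.Println(string(out), err)
}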

TestFunctional/parallel/MountCmd/specific-port (1.11s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-398000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port991902803/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-398000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (78.426083ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-398000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port991902803/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-398000 ssh "sudo umount -f /mount-9p": exit status 1 (72.167417ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-398000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-398000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port991902803/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.11s)

TestFunctional/parallel/MountCmd/VerifyCleanup (0.93s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-398000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2688948725/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-398000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2688948725/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-398000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2688948725/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-398000 ssh "findmnt -T" /mount1: exit status 1 (78.007083ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-398000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-398000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2688948725/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-398000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2688948725/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-398000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2688948725/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (0.93s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.18s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.18s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-398000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.1
registry.k8s.io/kube-proxy:v1.28.1
registry.k8s.io/kube-controller-manager:v1.28.1
registry.k8s.io/kube-apiserver:v1.28.1
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-398000
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-398000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-398000 image ls --format short --alsologtostderr:
I0914 15:07:56.912784    3190 out.go:296] Setting OutFile to fd 1 ...
I0914 15:07:56.912917    3190 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0914 15:07:56.912920    3190 out.go:309] Setting ErrFile to fd 2...
I0914 15:07:56.912923    3190 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0914 15:07:56.913054    3190 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17243-1006/.minikube/bin
I0914 15:07:56.913481    3190 config.go:182] Loaded profile config "functional-398000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0914 15:07:56.913539    3190 config.go:182] Loaded profile config "functional-398000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0914 15:07:56.914330    3190 ssh_runner.go:195] Run: systemctl --version
I0914 15:07:56.914341    3190 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/functional-398000/id_rsa Username:docker}
I0914 15:07:56.952850    3190 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.08s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-398000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/nginx                     | alpine            | fa0c6bb795403 | 43.4MB |
| registry.k8s.io/kube-proxy                  | v1.28.1           | 812f5241df7fd | 68.3MB |
| registry.k8s.io/coredns/coredns             | v1.10.1           | 97e04611ad434 | 51.4MB |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-398000 | 57732d566e177 | 30B    |
| registry.k8s.io/etcd                        | 3.5.9-0           | 9cdd6470f48c8 | 181MB  |
| registry.k8s.io/pause                       | 3.9               | 829e9de338bd5 | 514kB  |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| registry.k8s.io/kube-apiserver              | v1.28.1           | b29fb62480892 | 119MB  |
| registry.k8s.io/kube-controller-manager     | v1.28.1           | 8b6e1980b7584 | 116MB  |
| registry.k8s.io/kube-scheduler              | v1.28.1           | b4a5a57e99492 | 57.8MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| gcr.io/google-containers/addon-resizer      | functional-398000 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-398000 image ls --format table --alsologtostderr:
I0914 15:07:57.159960    3196 out.go:296] Setting OutFile to fd 1 ...
I0914 15:07:57.160138    3196 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0914 15:07:57.160147    3196 out.go:309] Setting ErrFile to fd 2...
I0914 15:07:57.160150    3196 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0914 15:07:57.160327    3196 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17243-1006/.minikube/bin
I0914 15:07:57.161212    3196 config.go:182] Loaded profile config "functional-398000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0914 15:07:57.161270    3196 config.go:182] Loaded profile config "functional-398000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0914 15:07:57.162173    3196 ssh_runner.go:195] Run: systemctl --version
I0914 15:07:57.162183    3196 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/functional-398000/id_rsa Username:docker}
I0914 15:07:57.199673    3196 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-398000 image ls --format json --alsologtostderr:
[{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"fa0c6bb795403f8762e5cbf7b9f395aa036e7bd61c707485c1968b79bb3da9f1","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43400000"},{"id":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"51400000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-398000"],"size":"32900000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"8b6e1980b7584ebf92
ee961322982c26a525c4e4e2181e037b8854697be71965","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.1"],"size":"116000000"},{"id":"b4a5a57e994924bffc4556da6c6c39d27ebaf593155983163d0b2367037bcb87","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.1"],"size":"57800000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"514000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests"
:[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"57732d566e177505528198c0af6b60307304c01c8e2d13a3f384745ed72934ed","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-398000"],"size":"30"},{"id":"b29fb62480892633ac479243b9841b88f9ae30865773fd76b97522541cd5633a","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.1"],"size":"119000000"},{"id":"812f5241df7fd64adb98d461bd6259a825a371fb3b2d5258752579380bc39c26","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.1"],"size":"68300000"},{"id":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"181000000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-398000 image ls --format json --alsologtostderr:
I0914 15:07:57.077562    3194 out.go:296] Setting OutFile to fd 1 ...
I0914 15:07:57.077691    3194 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0914 15:07:57.077694    3194 out.go:309] Setting ErrFile to fd 2...
I0914 15:07:57.077701    3194 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0914 15:07:57.077853    3194 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17243-1006/.minikube/bin
I0914 15:07:57.078351    3194 config.go:182] Loaded profile config "functional-398000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0914 15:07:57.078414    3194 config.go:182] Loaded profile config "functional-398000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0914 15:07:57.079246    3194 ssh_runner.go:195] Run: systemctl --version
I0914 15:07:57.079256    3194 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/functional-398000/id_rsa Username:docker}
I0914 15:07:57.116734    3194 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-398000 image ls --format yaml --alsologtostderr:
- id: 57732d566e177505528198c0af6b60307304c01c8e2d13a3f384745ed72934ed
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-398000
size: "30"
- id: 8b6e1980b7584ebf92ee961322982c26a525c4e4e2181e037b8854697be71965
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.1
size: "116000000"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-398000
size: "32900000"
- id: b29fb62480892633ac479243b9841b88f9ae30865773fd76b97522541cd5633a
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.1
size: "119000000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: b4a5a57e994924bffc4556da6c6c39d27ebaf593155983163d0b2367037bcb87
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.1
size: "57800000"
- id: 9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "181000000"
- id: 97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "51400000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: fa0c6bb795403f8762e5cbf7b9f395aa036e7bd61c707485c1968b79bb3da9f1
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43400000"
- id: 812f5241df7fd64adb98d461bd6259a825a371fb3b2d5258752579380bc39c26
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.1
size: "68300000"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "514000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-398000 image ls --format yaml --alsologtostderr:
I0914 15:07:56.995996    3192 out.go:296] Setting OutFile to fd 1 ...
I0914 15:07:56.996147    3192 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0914 15:07:56.996150    3192 out.go:309] Setting ErrFile to fd 2...
I0914 15:07:56.996153    3192 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0914 15:07:56.996267    3192 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17243-1006/.minikube/bin
I0914 15:07:56.996690    3192 config.go:182] Loaded profile config "functional-398000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0914 15:07:56.996748    3192 config.go:182] Loaded profile config "functional-398000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0914 15:07:56.997647    3192 ssh_runner.go:195] Run: systemctl --version
I0914 15:07:56.997663    3192 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/functional-398000/id_rsa Username:docker}
I0914 15:07:57.035121    3192 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-398000 ssh pgrep buildkitd: exit status 1 (70.296375ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 image build -t localhost/my-image:functional-398000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-arm64 -p functional-398000 image build -t localhost/my-image:functional-398000 testdata/build --alsologtostderr: (1.921890875s)
functional_test.go:319: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-398000 image build -t localhost/my-image:functional-398000 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
a01966dde7f8: Pulling fs layer
a01966dde7f8: Verifying Checksum
a01966dde7f8: Download complete
a01966dde7f8: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> 71a676dd070f
Step 2/3 : RUN true
---> Running in a7f8b8b1afc4
Removing intermediate container a7f8b8b1afc4
---> 7f8185a523b3
Step 3/3 : ADD content.txt /
---> 55929939c22a
Successfully built 55929939c22a
Successfully tagged localhost/my-image:functional-398000
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-398000 image build -t localhost/my-image:functional-398000 testdata/build --alsologtostderr:
I0914 15:07:57.311555    3200 out.go:296] Setting OutFile to fd 1 ...
I0914 15:07:57.311804    3200 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0914 15:07:57.311807    3200 out.go:309] Setting ErrFile to fd 2...
I0914 15:07:57.311809    3200 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0914 15:07:57.311943    3200 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17243-1006/.minikube/bin
I0914 15:07:57.312444    3200 config.go:182] Loaded profile config "functional-398000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0914 15:07:57.312840    3200 config.go:182] Loaded profile config "functional-398000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0914 15:07:57.313754    3200 ssh_runner.go:195] Run: systemctl --version
I0914 15:07:57.313764    3200 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17243-1006/.minikube/machines/functional-398000/id_rsa Username:docker}
I0914 15:07:57.350975    3200 build_images.go:151] Building image from path: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.3358411953.tar
I0914 15:07:57.351022    3200 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0914 15:07:57.353919    3200 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3358411953.tar
I0914 15:07:57.355227    3200 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3358411953.tar: stat -c "%s %y" /var/lib/minikube/build/build.3358411953.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3358411953.tar': No such file or directory
I0914 15:07:57.355239    3200 ssh_runner.go:362] scp /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.3358411953.tar --> /var/lib/minikube/build/build.3358411953.tar (3072 bytes)
I0914 15:07:57.362273    3200 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3358411953
I0914 15:07:57.364773    3200 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3358411953 -xf /var/lib/minikube/build/build.3358411953.tar
I0914 15:07:57.367712    3200 docker.go:339] Building image: /var/lib/minikube/build/build.3358411953
I0914 15:07:57.367749    3200 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-398000 /var/lib/minikube/build/build.3358411953
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

I0914 15:07:59.193840    3200 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-398000 /var/lib/minikube/build/build.3358411953: (1.826124125s)
I0914 15:07:59.193910    3200 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3358411953
I0914 15:07:59.196769    3200 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3358411953.tar
I0914 15:07:59.199680    3200 build_images.go:207] Built localhost/my-image:functional-398000 from /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.3358411953.tar
I0914 15:07:59.199695    3200 build_images.go:123] succeeded building to: functional-398000
I0914 15:07:59.199699    3200 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.07s)

TestFunctional/parallel/ImageCommands/Setup (2.06s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.99383125s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-398000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.06s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.06s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 image load --daemon gcr.io/google-containers/addon-resizer:functional-398000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-arm64 -p functional-398000 image load --daemon gcr.io/google-containers/addon-resizer:functional-398000 --alsologtostderr: (1.98063325s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.06s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 image load --daemon gcr.io/google-containers/addon-resizer:functional-398000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-arm64 -p functional-398000 image load --daemon gcr.io/google-containers/addon-resizer:functional-398000 --alsologtostderr: (1.393488083s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.47s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.84s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.884041291s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-398000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 image load --daemon gcr.io/google-containers/addon-resizer:functional-398000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-arm64 -p functional-398000 image load --daemon gcr.io/google-containers/addon-resizer:functional-398000 --alsologtostderr: (1.829429375s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.84s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 image save gcr.io/google-containers/addon-resizer:functional-398000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.53s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 image rm gcr.io/google-containers/addon-resizer:functional-398000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.18s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.63s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.63s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.61s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-398000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 image save --daemon gcr.io/google-containers/addon-resizer:functional-398000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-398000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.61s)

TestFunctional/parallel/DockerEnv/bash (0.38s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-398000 docker-env) && out/minikube-darwin-arm64 status -p functional-398000"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-398000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.38s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 update-context --alsologtostderr -v=2
E0914 15:08:07.451181    1425 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/client.crt: no such file or directory
E0914 15:08:07.457612    1425 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/client.crt: no such file or directory
E0914 15:08:07.469656    1425 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/client.crt: no such file or directory
E0914 15:08:07.491702    1425 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/client.crt: no such file or directory
E0914 15:08:07.533819    1425 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/client.crt: no such file or directory
E0914 15:08:07.614035    1425 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/client.crt: no such file or directory
E0914 15:08:07.776086    1425 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/client.crt: no such file or directory
E0914 15:08:08.098178    1425 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/client.crt: no such file or directory
E0914 15:08:08.740290    1425 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/client.crt: no such file or directory
E0914 15:08:10.022385    1425 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/client.crt: no such file or directory
E0914 15:08:12.584458    1425 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/client.crt: no such file or directory
E0914 15:08:17.706454    1425 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/client.crt: no such file or directory
E0914 15:08:27.948343    1425 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/client.crt: no such file or directory
E0914 15:08:48.429979    1425 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/client.crt: no such file or directory
E0914 15:09:29.391135    1425 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-398000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-398000
--- PASS: TestFunctional/delete_addon-resizer_images (0.12s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-398000
--- PASS: TestFunctional/delete_my-image_image (0.04s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-398000
--- PASS: TestFunctional/delete_minikube_cached_images (0.04s)
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-717000 --driver=qemu2 
E0914 15:10:51.321447    1425 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/client.crt: no such file or directory
image_test.go:69: (dbg) Done: out/minikube-darwin-arm64 start -p image-717000 --driver=qemu2 : (27.665358125s)
--- PASS: TestImageBuild/serial/Setup (27.67s)
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-717000
image_test.go:78: (dbg) Done: out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-717000: (1.519554292s)
--- PASS: TestImageBuild/serial/NormalBuild (1.52s)
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-717000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.12s)
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-717000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.11s)
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-arm64 start -p ingress-addon-legacy-438000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=qemu2 
E0914 15:11:32.784277    1425 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/functional-398000/client.crt: no such file or directory
E0914 15:11:32.789482    1425 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/functional-398000/client.crt: no such file or directory
E0914 15:11:32.799548    1425 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/functional-398000/client.crt: no such file or directory
E0914 15:11:32.821663    1425 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/functional-398000/client.crt: no such file or directory
E0914 15:11:32.863755    1425 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/functional-398000/client.crt: no such file or directory
E0914 15:11:32.945861    1425 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/functional-398000/client.crt: no such file or directory
E0914 15:11:33.107965    1425 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/functional-398000/client.crt: no such file or directory
E0914 15:11:33.430044    1425 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/functional-398000/client.crt: no such file or directory
E0914 15:11:34.071710    1425 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/functional-398000/client.crt: no such file or directory
E0914 15:11:35.353843    1425 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/functional-398000/client.crt: no such file or directory
E0914 15:11:37.916138    1425 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/functional-398000/client.crt: no such file or directory
E0914 15:11:43.038515    1425 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/functional-398000/client.crt: no such file or directory
E0914 15:11:53.278642    1425 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/functional-398000/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-darwin-arm64 start -p ingress-addon-legacy-438000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=qemu2 : (1m5.127056625s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (65.13s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-438000 addons enable ingress --alsologtostderr -v=5
E0914 15:12:13.760818    1425 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/functional-398000/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-438000 addons enable ingress --alsologtostderr -v=5: (19.39286475s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (19.39s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-438000 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.24s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-288000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
E0914 15:13:35.161817    1425 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/addons-388000/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 start -p json-output-288000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : (46.382974375s)
--- PASS: TestJSONOutput/start/Command (46.38s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-288000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.28s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-288000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.21s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-288000 --output=json --user=testUser
E0914 15:14:16.642852    1425 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17243-1006/.minikube/profiles/functional-398000/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-288000 --output=json --user=testUser: (12.0688135s)
--- PASS: TestJSONOutput/stop/Command (12.07s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-817000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-817000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (90.931791ms)
-- stdout --
	{"specversion":"1.0","id":"3bebbbe6-3a87-4894-90e7-4d558186f0b3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-817000] minikube v1.31.2 on Darwin 13.5.2 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e311ed95-ad40-4fce-981f-175b1321aeb4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17243"}}
	{"specversion":"1.0","id":"ff678f60-dd77-45f9-aaa9-668b452e89f4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig"}}
	{"specversion":"1.0","id":"aeaf11ea-1e75-427c-8f82-289edb6d40e1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"fb7ae652-1c06-46ef-b967-95f22191800b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e086a4a9-543c-4e14-83b3-7f651469d515","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube"}}
	{"specversion":"1.0","id":"b0944a80-cf3b-4234-9d86-3162854e1ff8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"10df0b2f-a1a7-4d29-ac03-b41e65c0c217","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-817000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-817000
--- PASS: TestErrorJSONOutput (0.33s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-658000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-arm64 start -p first-658000 --driver=qemu2 : (29.814467875s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p second-659000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-arm64 start -p second-659000 --driver=qemu2 : (35.1869215s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 profile first-658000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 profile second-659000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-659000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-659000
helpers_test.go:175: Cleaning up "first-658000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-658000
--- PASS: TestMinikubeProfile (65.79s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-345000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-345000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (98.559584ms)
-- stdout --
	* [NoKubernetes-345000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17243
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17243-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-345000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-345000 "sudo systemctl is-active --quiet service kubelet": exit status 89 (44.438667ms)
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p NoKubernetes-345000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.14s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-345000
--- PASS: TestNoKubernetes/serial/Stop (0.06s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-345000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-345000 "sudo systemctl is-active --quiet service kubelet": exit status 89 (44.335916ms)
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p NoKubernetes-345000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-018000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (0.10s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-018000 -n old-k8s-version-018000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-018000 -n old-k8s-version-018000: exit status 7 (32.557583ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-018000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.10s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-399000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/no-preload/serial/Stop (0.06s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-399000 -n no-preload-399000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-399000 -n no-preload-399000: exit status 7 (28.587125ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-399000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.09s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-546000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/embed-certs/serial/Stop (0.07s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-546000 -n embed-certs-546000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-546000 -n embed-certs-546000: exit status 7 (28.506375ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-546000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.09s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-850000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (0.06s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-850000 -n default-k8s-diff-port-850000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-850000 -n default-k8s-diff-port-850000: exit status 7 (29.502875ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-850000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.09s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-495000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-495000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/newest-cni/serial/Stop (0.06s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-495000 -n newest-cni-495000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-495000 -n newest-cni-495000: exit status 7 (29.941ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-495000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.09s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (22/255)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)
=== RUN   TestDownloadOnly/v1.28.1/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.1/cached-images (0.00s)
=== RUN   TestDownloadOnly/v1.28.1/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.1/binaries (0.00s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:210: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:420: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:474: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:297: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-710000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-710000
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-710000
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-710000
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-710000
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-710000
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-710000
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-710000
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-710000
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-710000
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-710000
>>> host: /etc/nsswitch.conf:
* Profile "cilium-710000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-710000"
>>> host: /etc/hosts:
* Profile "cilium-710000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-710000"
>>> host: /etc/resolv.conf:
* Profile "cilium-710000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-710000"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-710000

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-710000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-710000"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-710000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-710000"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-710000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-710000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-710000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-710000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-710000" does not exist

>>> k8s: coredns logs:
error: context "cilium-710000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-710000" does not exist

>>> k8s: api server logs:
error: context "cilium-710000" does not exist

>>> host: /etc/cni:
* Profile "cilium-710000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-710000"

>>> host: ip a s:
* Profile "cilium-710000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-710000"

>>> host: ip r s:
* Profile "cilium-710000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-710000"

>>> host: iptables-save:
* Profile "cilium-710000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-710000"

>>> host: iptables table nat:
* Profile "cilium-710000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-710000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-710000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-710000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-710000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-710000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-710000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-710000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-710000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-710000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-710000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-710000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-710000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-710000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-710000"

>>> host: kubelet daemon config:
* Profile "cilium-710000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-710000"

>>> k8s: kubelet logs:
* Profile "cilium-710000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-710000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-710000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-710000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-710000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-710000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-710000

>>> host: docker daemon status:
* Profile "cilium-710000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-710000"

>>> host: docker daemon config:
* Profile "cilium-710000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-710000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-710000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-710000"

>>> host: docker system info:
* Profile "cilium-710000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-710000"

>>> host: cri-docker daemon status:
* Profile "cilium-710000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-710000"

>>> host: cri-docker daemon config:
* Profile "cilium-710000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-710000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-710000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-710000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-710000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-710000"

>>> host: cri-dockerd version:
* Profile "cilium-710000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-710000"

>>> host: containerd daemon status:
* Profile "cilium-710000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-710000"

>>> host: containerd daemon config:
* Profile "cilium-710000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-710000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-710000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-710000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-710000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-710000"

>>> host: containerd config dump:
* Profile "cilium-710000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-710000"

>>> host: crio daemon status:
* Profile "cilium-710000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-710000"

>>> host: crio daemon config:
* Profile "cilium-710000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-710000"

>>> host: /etc/crio:
* Profile "cilium-710000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-710000"

>>> host: crio config:
* Profile "cilium-710000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-710000"

----------------------- debugLogs end: cilium-710000 [took: 2.280802625s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-710000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-710000
--- SKIP: TestNetworkPlugins/group/cilium (2.51s)

TestStartStop/group/disable-driver-mounts (0.24s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-014000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-014000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.24s)
