Test Report: QEMU_macOS 17102

38d5550e53f52b04c4b197c514428c4ecd9b2e1a:2023-08-21:30667

Failed tests (94/261)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 13.89
7 TestDownloadOnly/v1.16.0/kubectl 0
27 TestOffline 9.92
31 TestAddons/parallel/Registry 720.95
32 TestAddons/parallel/Ingress 136.82
33 TestAddons/parallel/InspektorGadget 480.94
34 TestAddons/parallel/MetricsServer 720.9
37 TestAddons/parallel/CSI 545.96
39 TestAddons/parallel/CloudSpanner 832.89
40 TestAddons/serial 0
41 TestAddons/StoppedEnableDisable 0
42 TestCertOptions 10.08
43 TestCertExpiration 195.25
44 TestDockerFlags 9.94
45 TestForceSystemdFlag 10.51
46 TestForceSystemdEnv 9.97
91 TestFunctional/parallel/ServiceCmdConnect 32.16
129 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.17
158 TestImageBuild/serial/BuildWithBuildArg 1.07
167 TestIngressAddonLegacy/serial/ValidateIngressAddons 52.27
202 TestMountStart/serial/StartWithMountFirst 10.34
205 TestMultiNode/serial/FreshStart2Nodes 9.81
206 TestMultiNode/serial/DeployApp2Nodes 84.36
207 TestMultiNode/serial/PingHostFrom2Pods 0.08
208 TestMultiNode/serial/AddNode 0.07
209 TestMultiNode/serial/ProfileList 0.16
210 TestMultiNode/serial/CopyFile 0.06
211 TestMultiNode/serial/StopNode 0.13
212 TestMultiNode/serial/StartAfterStop 0.1
213 TestMultiNode/serial/RestartKeepsNodes 5.36
214 TestMultiNode/serial/DeleteNode 0.09
215 TestMultiNode/serial/StopMultiNode 0.14
216 TestMultiNode/serial/RestartMultiNode 5.25
217 TestMultiNode/serial/ValidateNameConflict 20.1
221 TestPreload 9.93
223 TestScheduledStopUnix 9.97
224 TestSkaffold 12.11
227 TestRunningBinaryUpgrade 158.47
229 TestKubernetesUpgrade 15.39
242 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.79
243 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.27
244 TestStoppedBinaryUpgrade/Setup 167.53
246 TestPause/serial/Start 9.82
256 TestNoKubernetes/serial/StartWithK8s 10.12
257 TestNoKubernetes/serial/StartWithStopK8s 5.31
258 TestNoKubernetes/serial/Start 5.32
262 TestNoKubernetes/serial/StartNoArgs 5.31
264 TestNetworkPlugins/group/auto/Start 9.74
265 TestNetworkPlugins/group/kindnet/Start 9.85
266 TestNetworkPlugins/group/calico/Start 9.72
267 TestNetworkPlugins/group/custom-flannel/Start 9.86
268 TestNetworkPlugins/group/false/Start 9.66
269 TestNetworkPlugins/group/enable-default-cni/Start 9.77
270 TestNetworkPlugins/group/flannel/Start 9.79
271 TestNetworkPlugins/group/bridge/Start 9.82
272 TestNetworkPlugins/group/kubenet/Start 9.71
274 TestStartStop/group/old-k8s-version/serial/FirstStart 9.81
275 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
276 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
279 TestStartStop/group/old-k8s-version/serial/SecondStart 5.26
280 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
281 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.05
282 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
283 TestStartStop/group/old-k8s-version/serial/Pause 0.1
285 TestStartStop/group/no-preload/serial/FirstStart 9.83
286 TestStoppedBinaryUpgrade/Upgrade 3.15
287 TestStoppedBinaryUpgrade/MinikubeLogs 0.14
289 TestStartStop/group/embed-certs/serial/FirstStart 9.82
290 TestStartStop/group/no-preload/serial/DeployApp 0.09
291 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.11
294 TestStartStop/group/no-preload/serial/SecondStart 5.2
295 TestStartStop/group/embed-certs/serial/DeployApp 0.08
296 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.11
299 TestStartStop/group/embed-certs/serial/SecondStart 5.29
300 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
301 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.05
302 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
303 TestStartStop/group/no-preload/serial/Pause 0.1
305 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 10.06
306 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
307 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.05
308 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
309 TestStartStop/group/embed-certs/serial/Pause 0.1
311 TestStartStop/group/newest-cni/serial/FirstStart 9.85
312 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.08
313 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
316 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.25
321 TestStartStop/group/newest-cni/serial/SecondStart 5.24
322 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
323 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.05
324 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
325 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.09
328 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.08
329 TestStartStop/group/newest-cni/serial/Pause 0.1

TestDownloadOnly/v1.16.0/json-events (13.89s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-670000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-670000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=qemu2 : exit status 40 (13.887252167s)

-- stdout --
	{"specversion":"1.0","id":"a7ba67a4-b7db-400a-9889-345c683335db","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-670000] minikube v1.31.2 on Darwin 13.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3e37504b-3e14-4b13-89b3-fe39eee18107","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17102"}}
	{"specversion":"1.0","id":"40b05212-5640-4b80-80b8-0c69efd14d88","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig"}}
	{"specversion":"1.0","id":"a8ab8bee-493d-4b18-a208-c537a325c958","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"d658de32-0d2b-4295-8784-93234a90e69c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a5a48abf-0956-42dc-8acd-93fb950583e9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube"}}
	{"specversion":"1.0","id":"ca187989-3d91-4aa1-b091-7731e76b14de","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"a6fdc6f3-30b6-4bf5-97f0-308a783bb845","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"bcbd53ce-5cb8-4365-bca8-5d52c239ab53","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"a83e3910-dacd-402b-81d7-290ec8459973","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"6d5e06e6-44b4-4333-be04-b25cd9a7ef5b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"1f80c210-bb6f-41af-a60d-6ac411c868f1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node download-only-670000 in cluster download-only-670000","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"d8c3d2d8-1920-4626-aebc-acf57e976207","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.16.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"f20ea5f5-0bc3-4bed-a0e1-9a7c47126209","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/17102-920/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1046845b8 0x1046845b8 0x1046845b8 0x1046845b8 0x1046845b8 0x1046845b8 0x1046845b8] Decompressors:map[bz2:0x1400058de18 gz:0x1400058de70 tar:0x1400058de20 tar.bz2:0x1400058de30 tar.gz:0x1400058de40 tar.xz:0x1400058de50 tar.zst:0x1400058de60 tbz2:0x1400058de30 tgz:0x1400058
de40 txz:0x1400058de50 tzst:0x1400058de60 xz:0x1400058de78 zip:0x1400058de80 zst:0x1400058de90] Getters:map[file:0x14000f4c600 http:0x14000144460 https:0x14000144500] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"b98e228a-c3da-4e68-8c58-43fb3916da57","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0821 03:33:15.084599    1364 out.go:296] Setting OutFile to fd 1 ...
	I0821 03:33:15.084734    1364 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 03:33:15.084737    1364 out.go:309] Setting ErrFile to fd 2...
	I0821 03:33:15.084739    1364 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 03:33:15.084854    1364 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17102-920/.minikube/bin
	W0821 03:33:15.084911    1364 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17102-920/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17102-920/.minikube/config/config.json: no such file or directory
	I0821 03:33:15.085985    1364 out.go:303] Setting JSON to true
	I0821 03:33:15.102645    1364 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":169,"bootTime":1692613826,"procs":399,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0821 03:33:15.102723    1364 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0821 03:33:15.109779    1364 out.go:97] [download-only-670000] minikube v1.31.2 on Darwin 13.5 (arm64)
	I0821 03:33:15.113932    1364 out.go:169] MINIKUBE_LOCATION=17102
	W0821 03:33:15.109940    1364 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball: no such file or directory
	I0821 03:33:15.109949    1364 notify.go:220] Checking for updates...
	I0821 03:33:15.122864    1364 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	I0821 03:33:15.126942    1364 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0821 03:33:15.128266    1364 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0821 03:33:15.130953    1364 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	W0821 03:33:15.136890    1364 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0821 03:33:15.137064    1364 driver.go:373] Setting default libvirt URI to qemu:///system
	I0821 03:33:15.141874    1364 out.go:97] Using the qemu2 driver based on user configuration
	I0821 03:33:15.141883    1364 start.go:298] selected driver: qemu2
	I0821 03:33:15.141885    1364 start.go:902] validating driver "qemu2" against <nil>
	I0821 03:33:15.141949    1364 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0821 03:33:15.145922    1364 out.go:169] Automatically selected the socket_vmnet network
	I0821 03:33:15.152485    1364 start_flags.go:382] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0821 03:33:15.152630    1364 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0821 03:33:15.152687    1364 cni.go:84] Creating CNI manager for ""
	I0821 03:33:15.152703    1364 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0821 03:33:15.152709    1364 start_flags.go:319] config:
	{Name:download-only-670000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-670000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: N
etworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 03:33:15.158251    1364 iso.go:125] acquiring lock: {Name:mk813ea611542195bb0511881888be3fabc72ff6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 03:33:15.161924    1364 out.go:97] Downloading VM boot image ...
	I0821 03:33:15.161950    1364 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/17102-920/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso
	E0821 03:33:15.323126    1364 iso.go:90] Unable to download https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso: getter: &{Ctx:context.Background Src:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso.sha256 Dst:/Users/jenkins/minikube-integration/17102-920/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso.download Pwd: Mode:2 Umask:---------- Detectors:[0x1046845b8 0x1046845b8 0x1046845b8 0x1046845b8 0x1046845b8 0x1046845b8 0x1046845b8] Decompressors:map[bz2:0x1400058de18 gz:0x1400058de70 tar:0x1400058de20 tar.bz2:0x1400058de30 tar.gz:0x1400058de40 tar.xz:0x1400058de50 tar.zst:0x1400058de60 tbz2:0x1400058de30 tgz:0x1400058de40 txz:0x1400058de50 tzst:0x1400058de60 xz:0x1400058de78 zip:0x1400058de80 zst:0x1400058de90] Getters:map[file:0x14000ff1c30 http:0x14000dcd8b0 https:0x14000dcd900] Dir:false ProgressListener:<nil> Insecure:false Dis
ableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	I0821 03:33:15.323189    1364 iso.go:125] acquiring lock: {Name:mk813ea611542195bb0511881888be3fabc72ff6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 03:33:15.328691    1364 out.go:97] Downloading VM boot image ...
	I0821 03:33:15.328780    1364 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.31.0/minikube-v1.31.0-arm64.iso?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.31.0/minikube-v1.31.0-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/17102-920/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso
	I0821 03:33:22.835102    1364 out.go:97] Starting control plane node download-only-670000 in cluster download-only-670000
	I0821 03:33:22.835130    1364 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0821 03:33:22.892327    1364 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0821 03:33:22.892399    1364 cache.go:57] Caching tarball of preloaded images
	I0821 03:33:22.892582    1364 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0821 03:33:22.897647    1364 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0821 03:33:22.897654    1364 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0821 03:33:22.975485    1364 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4?checksum=md5:a000baffb0664b293d602f95ed25caa6 -> /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0821 03:33:27.948828    1364 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0821 03:33:27.948974    1364 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0821 03:33:28.589788    1364 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0821 03:33:28.589982    1364 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/download-only-670000/config.json ...
	I0821 03:33:28.590000    1364 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/download-only-670000/config.json: {Name:mk3f18ac86e426c28be79e36d4316c065cb7c923 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:33:28.590247    1364 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0821 03:33:28.590424    1364 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 -> /Users/jenkins/minikube-integration/17102-920/.minikube/cache/darwin/arm64/v1.16.0/kubectl
	I0821 03:33:28.905100    1364 out.go:169] 
	W0821 03:33:28.909303    1364 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/17102-920/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1046845b8 0x1046845b8 0x1046845b8 0x1046845b8 0x1046845b8 0x1046845b8 0x1046845b8] Decompressors:map[bz2:0x1400058de18 gz:0x1400058de70 tar:0x1400058de20 tar.bz2:0x1400058de30 tar.gz:0x1400058de40 tar.xz:0x1400058de50 tar.zst:0x1400058de60 tbz2:0x1400058de30 tgz:0x1400058de40 txz:0x1400058de50 tzst:0x1400058de60 xz:0x1400058de78 zip:0x1400058de80 zst:0x1400058de90] Getters:map[file:0x14000f4c600 http:0x14000144460 https:0x14000144500] Dir:false ProgressListener:
<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0821 03:33:28.909332    1364 out_reason.go:110] 
	W0821 03:33:28.916086    1364 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0821 03:33:28.919157    1364 out.go:169] 

** /stderr **
aaa_download_only_test.go:71: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-670000" "--force" "--alsologtostderr" "--kubernetes-version=v1.16.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.16.0/json-events (13.89s)
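
The failure above comes from the checksum fetch, not from the kubectl binary itself: the downloader appends checksum=file:...kubectl.sha1 to the request, and that .sha1 URL returns 404 because Kubernetes v1.16.0 predates upstream darwin/arm64 kubectl builds. A minimal standalone Go sketch (not minikube code) that reproduces the failing lookup with a HEAD request against the exact URL from the log:

	// probe.go: reproduce the 404 that aborts the download-only run.
	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		// Checksum URL copied verbatim from the error above.
		url := "https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1"
		resp, err := http.Head(url)
		if err != nil {
			fmt.Println("request error:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println(url, "->", resp.Status) // expect 404 Not Found for this version/arch
	}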

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:160: expected the file for binary exist at "/Users/jenkins/minikube-integration/17102-920/.minikube/cache/darwin/arm64/v1.16.0/kubectl" but got error stat /Users/jenkins/minikube-integration/17102-920/.minikube/cache/darwin/arm64/v1.16.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.16.0/kubectl (0.00s)
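
This is a knock-on failure: the earlier download-only run aborted, so nothing was cached, and this subtest only checks that the cached binary exists. A sketch of that kind of stat check (the cache path is copied from the log; this illustrates the check, it is not the test's actual helper):

	// statcheck.go: the shape of the existence check the subtest performs.
	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		path := "/Users/jenkins/minikube-integration/17102-920/.minikube/cache/darwin/arm64/v1.16.0/kubectl"
		if _, err := os.Stat(path); err != nil {
			// On this run: "no such file or directory", as reported above.
			fmt.Println("expected cached kubectl:", err)
			os.Exit(1)
		}
		fmt.Println("cached kubectl present at", path)
	}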

TestOffline (9.92s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-481000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-481000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.747729042s)

-- stdout --
	* [offline-docker-481000] minikube v1.31.2 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17102
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node offline-docker-481000 in cluster offline-docker-481000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-481000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0821 04:25:44.326131    4122 out.go:296] Setting OutFile to fd 1 ...
	I0821 04:25:44.326258    4122 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:25:44.326261    4122 out.go:309] Setting ErrFile to fd 2...
	I0821 04:25:44.326263    4122 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:25:44.326384    4122 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17102-920/.minikube/bin
	I0821 04:25:44.327550    4122 out.go:303] Setting JSON to false
	I0821 04:25:44.344037    4122 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3318,"bootTime":1692613826,"procs":420,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0821 04:25:44.344103    4122 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0821 04:25:44.347633    4122 out.go:177] * [offline-docker-481000] minikube v1.31.2 on Darwin 13.5 (arm64)
	I0821 04:25:44.354620    4122 out.go:177]   - MINIKUBE_LOCATION=17102
	I0821 04:25:44.354755    4122 notify.go:220] Checking for updates...
	I0821 04:25:44.361503    4122 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	I0821 04:25:44.364548    4122 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0821 04:25:44.367609    4122 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0821 04:25:44.370622    4122 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	I0821 04:25:44.373547    4122 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0821 04:25:44.376874    4122 config.go:182] Loaded profile config "multinode-806000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0821 04:25:44.376919    4122 driver.go:373] Setting default libvirt URI to qemu:///system
	I0821 04:25:44.380401    4122 out.go:177] * Using the qemu2 driver based on user configuration
	I0821 04:25:44.387497    4122 start.go:298] selected driver: qemu2
	I0821 04:25:44.387504    4122 start.go:902] validating driver "qemu2" against <nil>
	I0821 04:25:44.387512    4122 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0821 04:25:44.389446    4122 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0821 04:25:44.392540    4122 out.go:177] * Automatically selected the socket_vmnet network
	I0821 04:25:44.395658    4122 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0821 04:25:44.395679    4122 cni.go:84] Creating CNI manager for ""
	I0821 04:25:44.395687    4122 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0821 04:25:44.395691    4122 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0821 04:25:44.395697    4122 start_flags.go:319] config:
	{Name:offline-docker-481000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:offline-docker-481000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket:
NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 04:25:44.400024    4122 iso.go:125] acquiring lock: {Name:mk813ea611542195bb0511881888be3fabc72ff6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 04:25:44.403622    4122 out.go:177] * Starting control plane node offline-docker-481000 in cluster offline-docker-481000
	I0821 04:25:44.411532    4122 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime docker
	I0821 04:25:44.411564    4122 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4
	I0821 04:25:44.411575    4122 cache.go:57] Caching tarball of preloaded images
	I0821 04:25:44.411644    4122 preload.go:174] Found /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0821 04:25:44.411650    4122 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on docker
	I0821 04:25:44.411723    4122 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/offline-docker-481000/config.json ...
	I0821 04:25:44.411735    4122 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/offline-docker-481000/config.json: {Name:mk6549c641c42025614fac226d2f9742674d0887 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 04:25:44.411977    4122 start.go:365] acquiring machines lock for offline-docker-481000: {Name:mk9b32d9fe994be32d77812db464b2cfa7bfb400 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0821 04:25:44.412017    4122 start.go:369] acquired machines lock for "offline-docker-481000" in 29.25µs
	I0821 04:25:44.412029    4122 start.go:93] Provisioning new machine with config: &{Name:offline-docker-481000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 Clus
terName:offline-docker-481000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mount
Port:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0821 04:25:44.412069    4122 start.go:125] createHost starting for "" (driver="qemu2")
	I0821 04:25:44.420486    4122 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0821 04:25:44.434545    4122 start.go:159] libmachine.API.Create for "offline-docker-481000" (driver="qemu2")
	I0821 04:25:44.434569    4122 client.go:168] LocalClient.Create starting
	I0821 04:25:44.434636    4122 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem
	I0821 04:25:44.434663    4122 main.go:141] libmachine: Decoding PEM data...
	I0821 04:25:44.434674    4122 main.go:141] libmachine: Parsing certificate...
	I0821 04:25:44.434717    4122 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/cert.pem
	I0821 04:25:44.434735    4122 main.go:141] libmachine: Decoding PEM data...
	I0821 04:25:44.434743    4122 main.go:141] libmachine: Parsing certificate...
	I0821 04:25:44.435095    4122 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17102-920/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17102-920/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0821 04:25:44.559150    4122 main.go:141] libmachine: Creating SSH key...
	I0821 04:25:44.640412    4122 main.go:141] libmachine: Creating Disk image...
	I0821 04:25:44.640421    4122 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0821 04:25:44.640590    4122 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17102-920/.minikube/machines/offline-docker-481000/disk.qcow2.raw /Users/jenkins/minikube-integration/17102-920/.minikube/machines/offline-docker-481000/disk.qcow2
	I0821 04:25:44.649935    4122 main.go:141] libmachine: STDOUT: 
	I0821 04:25:44.649955    4122 main.go:141] libmachine: STDERR: 
	I0821 04:25:44.650028    4122 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/offline-docker-481000/disk.qcow2 +20000M
	I0821 04:25:44.657916    4122 main.go:141] libmachine: STDOUT: Image resized.
	
	I0821 04:25:44.657932    4122 main.go:141] libmachine: STDERR: 
	I0821 04:25:44.657959    4122 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17102-920/.minikube/machines/offline-docker-481000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17102-920/.minikube/machines/offline-docker-481000/disk.qcow2
	I0821 04:25:44.657976    4122 main.go:141] libmachine: Starting QEMU VM...
	I0821 04:25:44.658019    4122 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17102-920/.minikube/machines/offline-docker-481000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/offline-docker-481000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17102-920/.minikube/machines/offline-docker-481000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:af:fb:0e:d9:e2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/offline-docker-481000/disk.qcow2
	I0821 04:25:44.659589    4122 main.go:141] libmachine: STDOUT: 
	I0821 04:25:44.659602    4122 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0821 04:25:44.659626    4122 client.go:171] LocalClient.Create took 225.055458ms
	I0821 04:25:46.661740    4122 start.go:128] duration metric: createHost completed in 2.249708417s
	I0821 04:25:46.661758    4122 start.go:83] releasing machines lock for "offline-docker-481000", held for 2.249794959s
	W0821 04:25:46.661780    4122 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:25:46.672469    4122 out.go:177] * Deleting "offline-docker-481000" in qemu2 ...
	W0821 04:25:46.680771    4122 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:25:46.680785    4122 start.go:687] Will try again in 5 seconds ...
	I0821 04:25:51.682941    4122 start.go:365] acquiring machines lock for offline-docker-481000: {Name:mk9b32d9fe994be32d77812db464b2cfa7bfb400 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0821 04:25:51.683398    4122 start.go:369] acquired machines lock for "offline-docker-481000" in 347.125µs
	I0821 04:25:51.683541    4122 start.go:93] Provisioning new machine with config: &{Name:offline-docker-481000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 Clus
terName:offline-docker-481000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mount
Port:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0821 04:25:51.683799    4122 start.go:125] createHost starting for "" (driver="qemu2")
	I0821 04:25:51.693425    4122 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0821 04:25:51.740968    4122 start.go:159] libmachine.API.Create for "offline-docker-481000" (driver="qemu2")
	I0821 04:25:51.741021    4122 client.go:168] LocalClient.Create starting
	I0821 04:25:51.741122    4122 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem
	I0821 04:25:51.741184    4122 main.go:141] libmachine: Decoding PEM data...
	I0821 04:25:51.741203    4122 main.go:141] libmachine: Parsing certificate...
	I0821 04:25:51.741271    4122 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/cert.pem
	I0821 04:25:51.741306    4122 main.go:141] libmachine: Decoding PEM data...
	I0821 04:25:51.741321    4122 main.go:141] libmachine: Parsing certificate...
	I0821 04:25:51.741815    4122 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17102-920/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17102-920/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0821 04:25:51.872507    4122 main.go:141] libmachine: Creating SSH key...
	I0821 04:25:51.987929    4122 main.go:141] libmachine: Creating Disk image...
	I0821 04:25:51.987935    4122 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0821 04:25:51.988088    4122 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17102-920/.minikube/machines/offline-docker-481000/disk.qcow2.raw /Users/jenkins/minikube-integration/17102-920/.minikube/machines/offline-docker-481000/disk.qcow2
	I0821 04:25:51.996936    4122 main.go:141] libmachine: STDOUT: 
	I0821 04:25:51.996951    4122 main.go:141] libmachine: STDERR: 
	I0821 04:25:51.997017    4122 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/offline-docker-481000/disk.qcow2 +20000M
	I0821 04:25:52.004327    4122 main.go:141] libmachine: STDOUT: Image resized.
	
	I0821 04:25:52.004341    4122 main.go:141] libmachine: STDERR: 
	I0821 04:25:52.004354    4122 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17102-920/.minikube/machines/offline-docker-481000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17102-920/.minikube/machines/offline-docker-481000/disk.qcow2
	I0821 04:25:52.004361    4122 main.go:141] libmachine: Starting QEMU VM...
	I0821 04:25:52.004397    4122 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17102-920/.minikube/machines/offline-docker-481000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/offline-docker-481000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17102-920/.minikube/machines/offline-docker-481000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:05:08:1c:99:38 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/offline-docker-481000/disk.qcow2
	I0821 04:25:52.005806    4122 main.go:141] libmachine: STDOUT: 
	I0821 04:25:52.005820    4122 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0821 04:25:52.005830    4122 client.go:171] LocalClient.Create took 264.808791ms
	I0821 04:25:54.007986    4122 start.go:128] duration metric: createHost completed in 2.324202958s
	I0821 04:25:54.008076    4122 start.go:83] releasing machines lock for "offline-docker-481000", held for 2.324709s
	W0821 04:25:54.008636    4122 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-481000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-481000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:25:54.016925    4122 out.go:177] 
	W0821 04:25:54.021611    4122 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0821 04:25:54.021640    4122 out.go:239] * 
	* 
	W0821 04:25:54.024719    4122 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0821 04:25:54.033359    4122 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-481000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:522: *** TestOffline FAILED at 2023-08-21 04:25:54.051884 -0700 PDT m=+3159.063988376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-481000 -n offline-docker-481000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-481000 -n offline-docker-481000: exit status 7 (65.196458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-481000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-481000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-481000
--- FAIL: TestOffline (9.92s)
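
Every qemu2 start in this run fails the same way: QEMU is launched through /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the socket_vmnet daemon, so the VM never boots. A standalone Go sketch (an illustration, not minikube code) that tests the same precondition by dialing the unix socket from the log:

	// socketcheck.go: verify something is listening on the socket_vmnet socket.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path from the qemu invocation above
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// On this host the dial fails, matching "Connection refused" in the log.
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}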

TestAddons/parallel/Registry (720.95s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:304: failed waiting for registry replicacontroller to stabilize: timed out waiting for the condition
addons_test.go:306: registry stabilized in 6m0.001489s
addons_test.go:308: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
addons_test.go:308: ***** TestAddons/parallel/Registry: pod "actual-registry=true" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-500000 -n addons-500000
addons_test.go:308: TestAddons/parallel/Registry: showing logs for failed pods as of 2023-08-21 03:52:32.635124 -0700 PDT m=+1157.662089001
addons_test.go:309: failed waiting for pod actual-registry: actual-registry=true within 6m0s: context deadline exceeded
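
The wait behind addons_test.go:308 is, in effect, a label-selector poll with a 6m0s budget. A standalone client-go sketch of that pattern (an assumption about the helper's shape, not minikube's actual code); note that wait's timeout error is the exact "timed out waiting for the condition" reported at addons_test.go:304:

	// registrywait.go: poll kube-system for a Running pod labeled actual-registry=true.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll every 5s; give up after 6m, the budget the test reports.
		err = wait.PollImmediate(5*time.Second, 6*time.Minute, func() (bool, error) {
			pods, lerr := cs.CoreV1().Pods("kube-system").List(context.TODO(),
				metav1.ListOptions{LabelSelector: "actual-registry=true"})
			if lerr != nil {
				return false, lerr
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
			}
			return false, nil // keep polling until the deadline
		})
		fmt.Println("wait result:", err) // nil on success; "timed out waiting for the condition" on timeout
	}
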
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-500000 -n addons-500000
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-500000 logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-670000 | jenkins | v1.31.2 | 21 Aug 23 03:33 PDT |                     |
	|         | -p download-only-670000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	| start   | -o=json --download-only           | download-only-670000 | jenkins | v1.31.2 | 21 Aug 23 03:33 PDT |                     |
	|         | -p download-only-670000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.4      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	| start   | -o=json --download-only           | download-only-670000 | jenkins | v1.31.2 | 21 Aug 23 03:33 PDT |                     |
	|         | -p download-only-670000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.0-rc.1 |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.31.2 | 21 Aug 23 03:33 PDT | 21 Aug 23 03:33 PDT |
	| delete  | -p download-only-670000           | download-only-670000 | jenkins | v1.31.2 | 21 Aug 23 03:33 PDT | 21 Aug 23 03:33 PDT |
	| delete  | -p download-only-670000           | download-only-670000 | jenkins | v1.31.2 | 21 Aug 23 03:33 PDT | 21 Aug 23 03:33 PDT |
	| start   | --download-only -p                | binary-mirror-462000 | jenkins | v1.31.2 | 21 Aug 23 03:33 PDT |                     |
	|         | binary-mirror-462000              |                      |         |         |                     |                     |
	|         | --alsologtostderr                 |                      |         |         |                     |                     |
	|         | --binary-mirror                   |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49329            |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-462000           | binary-mirror-462000 | jenkins | v1.31.2 | 21 Aug 23 03:33 PDT | 21 Aug 23 03:33 PDT |
	| start   | -p addons-500000                  | addons-500000        | jenkins | v1.31.2 | 21 Aug 23 03:33 PDT | 21 Aug 23 03:40 PDT |
	|         | --wait=true --memory=4000         |                      |         |         |                     |                     |
	|         | --alsologtostderr                 |                      |         |         |                     |                     |
	|         | --addons=registry                 |                      |         |         |                     |                     |
	|         | --addons=metrics-server           |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots          |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver      |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                 |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner            |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget         |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	|         | --addons=ingress                  |                      |         |         |                     |                     |
	|         | --addons=ingress-dns              |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p          | addons-500000        | jenkins | v1.31.2 | 21 Aug 23 03:52 PDT |                     |
	|         | addons-500000                     |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
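For reference, the failing profile was created with every addon enabled at start time. A minimal sketch of the equivalent invocation, reassembled from the Audit rows above (flag order is not significant):

	out/minikube-darwin-arm64 start -p addons-500000 --wait=true --memory=4000 \
	  --alsologtostderr --addons=registry --addons=metrics-server \
	  --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth \
	  --addons=cloud-spanner --addons=inspektor-gadget --driver=qemu2 \
	  --addons=ingress --addons=ingress-dns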
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/21 03:33:48
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.7 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0821 03:33:48.415064    1442 out.go:296] Setting OutFile to fd 1 ...
	I0821 03:33:48.415176    1442 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 03:33:48.415179    1442 out.go:309] Setting ErrFile to fd 2...
	I0821 03:33:48.415182    1442 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 03:33:48.415284    1442 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17102-920/.minikube/bin
	I0821 03:33:48.416485    1442 out.go:303] Setting JSON to false
	I0821 03:33:48.431675    1442 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":202,"bootTime":1692613826,"procs":392,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0821 03:33:48.431757    1442 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0821 03:33:48.436776    1442 out.go:177] * [addons-500000] minikube v1.31.2 on Darwin 13.5 (arm64)
	I0821 03:33:48.443786    1442 out.go:177]   - MINIKUBE_LOCATION=17102
	I0821 03:33:48.443817    1442 notify.go:220] Checking for updates...
	I0821 03:33:48.452754    1442 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	I0821 03:33:48.459793    1442 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0821 03:33:48.466761    1442 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0821 03:33:48.469754    1442 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	I0821 03:33:48.472801    1442 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0821 03:33:48.476845    1442 driver.go:373] Setting default libvirt URI to qemu:///system
	I0821 03:33:48.479685    1442 out.go:177] * Using the qemu2 driver based on user configuration
	I0821 03:33:48.486794    1442 start.go:298] selected driver: qemu2
	I0821 03:33:48.486801    1442 start.go:902] validating driver "qemu2" against <nil>
	I0821 03:33:48.486809    1442 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0821 03:33:48.488928    1442 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0821 03:33:48.491687    1442 out.go:177] * Automatically selected the socket_vmnet network
	I0821 03:33:48.495787    1442 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0821 03:33:48.495806    1442 cni.go:84] Creating CNI manager for ""
	I0821 03:33:48.495814    1442 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0821 03:33:48.495818    1442 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0821 03:33:48.495823    1442 start_flags.go:319] config:
	{Name:addons-500000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:addons-500000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 03:33:48.500226    1442 iso.go:125] acquiring lock: {Name:mk813ea611542195bb0511881888be3fabc72ff6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 03:33:48.506762    1442 out.go:177] * Starting control plane node addons-500000 in cluster addons-500000
	I0821 03:33:48.510761    1442 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime docker
	I0821 03:33:48.510781    1442 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4
	I0821 03:33:48.510799    1442 cache.go:57] Caching tarball of preloaded images
	I0821 03:33:48.510861    1442 preload.go:174] Found /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0821 03:33:48.510867    1442 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on docker
	I0821 03:33:48.511057    1442 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/config.json ...
	I0821 03:33:48.511069    1442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/config.json: {Name:mke6ea6a330608889e821054234e4dab41e05376 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:33:48.511283    1442 start.go:365] acquiring machines lock for addons-500000: {Name:mk9b32d9fe994be32d77812db464b2cfa7bfb400 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0821 03:33:48.511397    1442 start.go:369] acquired machines lock for "addons-500000" in 109.25µs
	I0821 03:33:48.511409    1442 start.go:93] Provisioning new machine with config: &{Name:addons-500000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:addons-500000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0821 03:33:48.511444    1442 start.go:125] createHost starting for "" (driver="qemu2")
	I0821 03:33:48.515777    1442 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0821 03:33:48.825711    1442 start.go:159] libmachine.API.Create for "addons-500000" (driver="qemu2")
	I0821 03:33:48.825759    1442 client.go:168] LocalClient.Create starting
	I0821 03:33:48.825907    1442 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem
	I0821 03:33:48.926786    1442 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/cert.pem
	I0821 03:33:49.005435    1442 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17102-920/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17102-920/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0821 03:33:49.429478    1442 main.go:141] libmachine: Creating SSH key...
	I0821 03:33:49.603069    1442 main.go:141] libmachine: Creating Disk image...
	I0821 03:33:49.603078    1442 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0821 03:33:49.603290    1442 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/disk.qcow2.raw /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/disk.qcow2
	I0821 03:33:49.637224    1442 main.go:141] libmachine: STDOUT: 
	I0821 03:33:49.637249    1442 main.go:141] libmachine: STDERR: 
	I0821 03:33:49.637377    1442 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/disk.qcow2 +20000M
	I0821 03:33:49.644766    1442 main.go:141] libmachine: STDOUT: Image resized.
	
	I0821 03:33:49.644778    1442 main.go:141] libmachine: STDERR: 
	I0821 03:33:49.644801    1442 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/disk.qcow2
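The two qemu-img calls above (convert, then resize) can be replayed by hand to sanity-check disk creation; a minimal sketch, with the machine directory shortened to a hypothetical $DISK:

	DISK=$HOME/.minikube/machines/addons-500000/disk.qcow2   # hypothetical shorthand for the path in the log
	qemu-img convert -f raw -O qcow2 "$DISK.raw" "$DISK"
	qemu-img resize "$DISK" +20000M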
	I0821 03:33:49.644808    1442 main.go:141] libmachine: Starting QEMU VM...
	I0821 03:33:49.644850    1442 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:15:38:20:81:6d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/disk.qcow2
	I0821 03:33:49.712858    1442 main.go:141] libmachine: STDOUT: 
	I0821 03:33:49.712896    1442 main.go:141] libmachine: STDERR: 
	I0821 03:33:49.712900    1442 main.go:141] libmachine: Attempt 0
	I0821 03:33:49.712923    1442 main.go:141] libmachine: Searching for 5e:15:38:20:81:6d in /var/db/dhcpd_leases ...
	I0821 03:33:51.714037    1442 main.go:141] libmachine: Attempt 1
	I0821 03:33:51.714122    1442 main.go:141] libmachine: Searching for 5e:15:38:20:81:6d in /var/db/dhcpd_leases ...
	I0821 03:33:53.715339    1442 main.go:141] libmachine: Attempt 2
	I0821 03:33:53.715370    1442 main.go:141] libmachine: Searching for 5e:15:38:20:81:6d in /var/db/dhcpd_leases ...
	I0821 03:33:55.716394    1442 main.go:141] libmachine: Attempt 3
	I0821 03:33:55.716406    1442 main.go:141] libmachine: Searching for 5e:15:38:20:81:6d in /var/db/dhcpd_leases ...
	I0821 03:33:57.717443    1442 main.go:141] libmachine: Attempt 4
	I0821 03:33:57.717472    1442 main.go:141] libmachine: Searching for 5e:15:38:20:81:6d in /var/db/dhcpd_leases ...
	I0821 03:33:59.718558    1442 main.go:141] libmachine: Attempt 5
	I0821 03:33:59.718579    1442 main.go:141] libmachine: Searching for 5e:15:38:20:81:6d in /var/db/dhcpd_leases ...
	I0821 03:34:01.719634    1442 main.go:141] libmachine: Attempt 6
	I0821 03:34:01.719657    1442 main.go:141] libmachine: Searching for 5e:15:38:20:81:6d in /var/db/dhcpd_leases ...
	I0821 03:34:01.719810    1442 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0821 03:34:01.719849    1442 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:5e:15:38:20:81:6d ID:1,5e:15:38:20:81:6d Lease:0x64e48f18}
	I0821 03:34:01.719855    1442 main.go:141] libmachine: Found match: 5e:15:38:20:81:6d
	I0821 03:34:01.719867    1442 main.go:141] libmachine: IP: 192.168.105.2
	I0821 03:34:01.719873    1442 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
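The retry loop above resolves the VM's IP by polling the macOS DHCP lease database for the generated MAC address. Assuming the same host, the lookup can be reproduced directly:

	grep -i '5e:15:38:20:81:6d' /var/db/dhcpd_leases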
	I0821 03:34:03.738025    1442 machine.go:88] provisioning docker machine ...
	I0821 03:34:03.738086    1442 buildroot.go:166] provisioning hostname "addons-500000"
	I0821 03:34:03.739549    1442 main.go:141] libmachine: Using SSH client type: native
	I0821 03:34:03.740347    1442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102aae1e0] 0x102ab0c40 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0821 03:34:03.740367    1442 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-500000 && echo "addons-500000" | sudo tee /etc/hostname
	I0821 03:34:03.826570    1442 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-500000
	
	I0821 03:34:03.826696    1442 main.go:141] libmachine: Using SSH client type: native
	I0821 03:34:03.827174    1442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102aae1e0] 0x102ab0c40 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0821 03:34:03.827189    1442 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-500000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-500000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-500000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0821 03:34:03.891757    1442 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0821 03:34:03.891772    1442 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17102-920/.minikube CaCertPath:/Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17102-920/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17102-920/.minikube}
	I0821 03:34:03.891782    1442 buildroot.go:174] setting up certificates
	I0821 03:34:03.891796    1442 provision.go:83] configureAuth start
	I0821 03:34:03.891801    1442 provision.go:138] copyHostCerts
	I0821 03:34:03.891982    1442 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17102-920/.minikube/ca.pem (1078 bytes)
	I0821 03:34:03.892356    1442 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17102-920/.minikube/cert.pem (1123 bytes)
	I0821 03:34:03.892494    1442 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17102-920/.minikube/key.pem (1679 bytes)
	I0821 03:34:03.892606    1442 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17102-920/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca-key.pem org=jenkins.addons-500000 san=[192.168.105.2 192.168.105.2 localhost 127.0.0.1 minikube addons-500000]
	I0821 03:34:04.055231    1442 provision.go:172] copyRemoteCerts
	I0821 03:34:04.055290    1442 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0821 03:34:04.055299    1442 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/id_rsa Username:docker}
	I0821 03:34:04.085022    1442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0821 03:34:04.091757    1442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0821 03:34:04.098302    1442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0821 03:34:04.105297    1442 provision.go:86] duration metric: configureAuth took 213.489792ms
	I0821 03:34:04.105304    1442 buildroot.go:189] setting minikube options for container-runtime
	I0821 03:34:04.105410    1442 config.go:182] Loaded profile config "addons-500000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0821 03:34:04.105443    1442 main.go:141] libmachine: Using SSH client type: native
	I0821 03:34:04.105658    1442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102aae1e0] 0x102ab0c40 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0821 03:34:04.105665    1442 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0821 03:34:04.160033    1442 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0821 03:34:04.160039    1442 buildroot.go:70] root file system type: tmpfs
	I0821 03:34:04.160095    1442 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0821 03:34:04.160145    1442 main.go:141] libmachine: Using SSH client type: native
	I0821 03:34:04.160376    1442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102aae1e0] 0x102ab0c40 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0821 03:34:04.160410    1442 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0821 03:34:04.217511    1442 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0821 03:34:04.217555    1442 main.go:141] libmachine: Using SSH client type: native
	I0821 03:34:04.217777    1442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102aae1e0] 0x102ab0c40 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0821 03:34:04.217788    1442 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
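This is an idempotent update pattern: diff exits non-zero when the staged unit differs from the installed one (or, as in the first-boot case below, when the installed file does not yet exist), and only then is the new file moved into place and docker reloaded, enabled, and restarted. The generic shape, with INSTALLED and STAGED as placeholders:

	sudo diff -u INSTALLED STAGED || { sudo mv STAGED INSTALLED; sudo systemctl daemon-reload; sudo systemctl restart docker; }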
	I0821 03:34:04.516566    1442 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0821 03:34:04.516576    1442 machine.go:91] provisioned docker machine in 778.543875ms
	I0821 03:34:04.516581    1442 client.go:171] LocalClient.Create took 15.691254833s
	I0821 03:34:04.516600    1442 start.go:167] duration metric: libmachine.API.Create for "addons-500000" took 15.691329875s
	I0821 03:34:04.516605    1442 start.go:300] post-start starting for "addons-500000" (driver="qemu2")
	I0821 03:34:04.516610    1442 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0821 03:34:04.516676    1442 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0821 03:34:04.516684    1442 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/id_rsa Username:docker}
	I0821 03:34:04.547645    1442 ssh_runner.go:195] Run: cat /etc/os-release
	I0821 03:34:04.548977    1442 info.go:137] Remote host: Buildroot 2021.02.12
	I0821 03:34:04.548988    1442 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17102-920/.minikube/addons for local assets ...
	I0821 03:34:04.549067    1442 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17102-920/.minikube/files for local assets ...
	I0821 03:34:04.549094    1442 start.go:303] post-start completed in 32.487208ms
	I0821 03:34:04.549503    1442 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/config.json ...
	I0821 03:34:04.549671    1442 start.go:128] duration metric: createHost completed in 16.038665083s
	I0821 03:34:04.549713    1442 main.go:141] libmachine: Using SSH client type: native
	I0821 03:34:04.549937    1442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102aae1e0] 0x102ab0c40 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0821 03:34:04.549942    1442 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0821 03:34:04.603319    1442 main.go:141] libmachine: SSH cmd err, output: <nil>: 1692614044.503149419
	
	I0821 03:34:04.603325    1442 fix.go:206] guest clock: 1692614044.503149419
	I0821 03:34:04.603329    1442 fix.go:219] Guest: 2023-08-21 03:34:04.503149419 -0700 PDT Remote: 2023-08-21 03:34:04.549674 -0700 PDT m=+16.153755168 (delta=-46.524581ms)
	I0821 03:34:04.603340    1442 fix.go:190] guest clock delta is within tolerance: -46.524581ms
	I0821 03:34:04.603349    1442 start.go:83] releasing machines lock for "addons-500000", held for 16.092394834s
	I0821 03:34:04.603625    1442 ssh_runner.go:195] Run: cat /version.json
	I0821 03:34:04.603635    1442 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/id_rsa Username:docker}
	I0821 03:34:04.603639    1442 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0821 03:34:04.603685    1442 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/id_rsa Username:docker}
	I0821 03:34:04.631400    1442 ssh_runner.go:195] Run: systemctl --version
	I0821 03:34:04.633303    1442 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0821 03:34:04.675003    1442 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0821 03:34:04.675044    1442 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0821 03:34:04.680093    1442 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0821 03:34:04.680102    1442 start.go:466] detecting cgroup driver to use...
	I0821 03:34:04.680217    1442 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0821 03:34:04.685575    1442 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0821 03:34:04.689003    1442 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0821 03:34:04.692463    1442 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0821 03:34:04.692496    1442 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0821 03:34:04.695492    1442 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0821 03:34:04.698438    1442 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0821 03:34:04.701779    1442 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0821 03:34:04.705308    1442 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0821 03:34:04.708997    1442 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0821 03:34:04.712485    1442 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0821 03:34:04.715157    1442 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0821 03:34:04.718062    1442 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0821 03:34:04.801182    1442 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0821 03:34:04.809752    1442 start.go:466] detecting cgroup driver to use...
	I0821 03:34:04.809829    1442 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0821 03:34:04.815491    1442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0821 03:34:04.820439    1442 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0821 03:34:04.826330    1442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0821 03:34:04.831197    1442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0821 03:34:04.835955    1442 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0821 03:34:04.893707    1442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0821 03:34:04.899704    1442 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0821 03:34:04.905738    1442 ssh_runner.go:195] Run: which cri-dockerd
	I0821 03:34:04.907314    1442 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0821 03:34:04.910018    1442 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0821 03:34:04.915159    1442 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0821 03:34:04.993497    1442 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0821 03:34:05.073322    1442 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0821 03:34:05.073337    1442 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0821 03:34:05.078736    1442 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0821 03:34:05.148942    1442 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0821 03:34:06.310888    1442 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.161962625s)
	I0821 03:34:06.310946    1442 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0821 03:34:06.389910    1442 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0821 03:34:06.470512    1442 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0821 03:34:06.540771    1442 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0821 03:34:06.608028    1442 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0821 03:34:06.614951    1442 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0821 03:34:06.680856    1442 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0821 03:34:06.705016    1442 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0821 03:34:06.705100    1442 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0821 03:34:06.707492    1442 start.go:534] Will wait 60s for crictl version
	I0821 03:34:06.707526    1442 ssh_runner.go:195] Run: which crictl
	I0821 03:34:06.708906    1442 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0821 03:34:06.723485    1442 start.go:550] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.4
	RuntimeApiVersion:  v1alpha2
	I0821 03:34:06.723553    1442 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0821 03:34:06.733136    1442 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0821 03:34:06.752243    1442 out.go:204] * Preparing Kubernetes v1.27.4 on Docker 24.0.4 ...
	I0821 03:34:06.752395    1442 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0821 03:34:06.753728    1442 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0821 03:34:06.757671    1442 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime docker
	I0821 03:34:06.757717    1442 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0821 03:34:06.767699    1442 docker.go:636] Got preloaded images: 
	I0821 03:34:06.767706    1442 docker.go:642] registry.k8s.io/kube-apiserver:v1.27.4 wasn't preloaded
	I0821 03:34:06.767758    1442 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0821 03:34:06.770623    1442 ssh_runner.go:195] Run: which lz4
	I0821 03:34:06.772016    1442 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0821 03:34:06.773407    1442 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0821 03:34:06.773426    1442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (343658271 bytes)
	I0821 03:34:08.065715    1442 docker.go:600] Took 1.293779 seconds to copy over tarball
	I0821 03:34:08.065776    1442 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0821 03:34:09.083194    1442 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.017432542s)
	I0821 03:34:09.083208    1442 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0821 03:34:09.098174    1442 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0821 03:34:09.101758    1442 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0821 03:34:09.107271    1442 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0821 03:34:09.185186    1442 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0821 03:34:11.583398    1442 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.398262792s)
	I0821 03:34:11.583497    1442 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0821 03:34:11.599112    1442 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.4
	registry.k8s.io/kube-controller-manager:v1.27.4
	registry.k8s.io/kube-scheduler:v1.27.4
	registry.k8s.io/kube-proxy:v1.27.4
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
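The list is produced by the docker images --format call shown above. Assuming the profile's VM is still reachable, the same check can be run from the host through minikube's ssh subcommand:

	out/minikube-darwin-arm64 -p addons-500000 ssh "docker images --format {{.Repository}}:{{.Tag}}"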
	I0821 03:34:11.599121    1442 cache_images.go:84] Images are preloaded, skipping loading
	I0821 03:34:11.599173    1442 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0821 03:34:11.606813    1442 cni.go:84] Creating CNI manager for ""
	I0821 03:34:11.606822    1442 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0821 03:34:11.606852    1442 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0821 03:34:11.606862    1442 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.27.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-500000 NodeName:addons-500000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0821 03:34:11.606930    1442 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-500000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
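The generated manifest above stitches four kubeadm API objects into one file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by --- markers. A config like this can be exercised without modifying the node via kubeadm's dry-run mode; a sketch, assuming the path used later in this log:

	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run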
	
	I0821 03:34:11.606959    1442 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-500000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.4 ClusterName:addons-500000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0821 03:34:11.607013    1442 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.4
	I0821 03:34:11.609958    1442 binaries.go:44] Found k8s binaries, skipping transfer
	I0821 03:34:11.609992    1442 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0821 03:34:11.613080    1442 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0821 03:34:11.618135    1442 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0821 03:34:11.623217    1442 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I0821 03:34:11.628067    1442 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0821 03:34:11.629338    1442 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0821 03:34:11.633264    1442 certs.go:56] Setting up /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000 for IP: 192.168.105.2
	I0821 03:34:11.633272    1442 certs.go:190] acquiring lock for shared ca certs: {Name:mkaf8bee91c9bef113528e728629bac5c142d5d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:34:11.633419    1442 certs.go:204] generating minikubeCA CA: /Users/jenkins/minikube-integration/17102-920/.minikube/ca.key
	I0821 03:34:11.709497    1442 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17102-920/.minikube/ca.crt ...
	I0821 03:34:11.709504    1442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/ca.crt: {Name:mk11304afc04d282dffa1bbfafecb7763b86f0d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:34:11.709741    1442 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17102-920/.minikube/ca.key ...
	I0821 03:34:11.709747    1442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/ca.key: {Name:mk7632addcfceaabe09bce428c8dd59051132a6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:34:11.709856    1442 certs.go:204] generating proxyClientCA CA: /Users/jenkins/minikube-integration/17102-920/.minikube/proxy-client-ca.key
	I0821 03:34:11.928292    1442 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17102-920/.minikube/proxy-client-ca.crt ...
	I0821 03:34:11.928298    1442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/proxy-client-ca.crt: {Name:mk59ba2d6f1e462ee2e456d21a76e6acaba82b70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:34:11.928531    1442 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17102-920/.minikube/proxy-client-ca.key ...
	I0821 03:34:11.928534    1442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/proxy-client-ca.key: {Name:mk02c96134c44ce7714696be07e0b5c22f58dc64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:34:11.928684    1442 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/client.key
	I0821 03:34:11.928691    1442 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/client.crt with IP's: []
	I0821 03:34:12.116170    1442 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/client.crt ...
	I0821 03:34:12.116177    1442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/client.crt: {Name:mk3182b685506ec2dbfcad41054e3ffc2bf0f3b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:34:12.116379    1442 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/client.key ...
	I0821 03:34:12.116384    1442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/client.key: {Name:mk087ee0a568a92e1e97ae6eb06dd6604454b2e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:34:12.116489    1442 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/apiserver.key.96055969
	I0821 03:34:12.116499    1442 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/apiserver.crt.96055969 with IP's: [192.168.105.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0821 03:34:12.174634    1442 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/apiserver.crt.96055969 ...
	I0821 03:34:12.174637    1442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/apiserver.crt.96055969: {Name:mk02f137a3a75334a28e6811666f6d1dde47709c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:34:12.174771    1442 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/apiserver.key.96055969 ...
	I0821 03:34:12.174774    1442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/apiserver.key.96055969: {Name:mk629f60ce1370d0aadb852a255428713cef631b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:34:12.174873    1442 certs.go:337] copying /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/apiserver.crt.96055969 -> /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/apiserver.crt
	I0821 03:34:12.175028    1442 certs.go:341] copying /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/apiserver.key.96055969 -> /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/apiserver.key
	I0821 03:34:12.175114    1442 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/proxy-client.key
	I0821 03:34:12.175123    1442 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/proxy-client.crt with IP's: []
	I0821 03:34:12.291172    1442 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/proxy-client.crt ...
	I0821 03:34:12.291175    1442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/proxy-client.crt: {Name:mk4861ba5de37ed8d82543663b167ed0e04664dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:34:12.291331    1442 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/proxy-client.key ...
	I0821 03:34:12.291334    1442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/proxy-client.key: {Name:mk5eb1fb206858f7f6262a3b86ec8673fdeb4399 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:34:12.291586    1442 certs.go:437] found cert: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca-key.pem (1679 bytes)
	I0821 03:34:12.291611    1442 certs.go:437] found cert: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem (1078 bytes)
	I0821 03:34:12.291633    1442 certs.go:437] found cert: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/Users/jenkins/minikube-integration/17102-920/.minikube/certs/cert.pem (1123 bytes)
	I0821 03:34:12.291654    1442 certs.go:437] found cert: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/Users/jenkins/minikube-integration/17102-920/.minikube/certs/key.pem (1679 bytes)
	I0821 03:34:12.292029    1442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0821 03:34:12.300489    1442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0821 03:34:12.307765    1442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0821 03:34:12.314499    1442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0821 03:34:12.321449    1442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0821 03:34:12.328965    1442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0821 03:34:12.336085    1442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0821 03:34:12.342676    1442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0821 03:34:12.349529    1442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0821 03:34:12.356907    1442 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0821 03:34:12.363000    1442 ssh_runner.go:195] Run: openssl version
	I0821 03:34:12.364943    1442 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0821 03:34:12.368659    1442 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0821 03:34:12.370316    1442 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 21 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I0821 03:34:12.370337    1442 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0821 03:34:12.372170    1442 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0821 03:34:12.375051    1442 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0821 03:34:12.376254    1442 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0821 03:34:12.376292    1442 kubeadm.go:404] StartCluster: {Name:addons-500000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:addons-500000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 03:34:12.376353    1442 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0821 03:34:12.381765    1442 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0821 03:34:12.385127    1442 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0821 03:34:12.388050    1442 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0821 03:34:12.390699    1442 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0821 03:34:12.390714    1442 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0821 03:34:12.412358    1442 kubeadm.go:322] [init] Using Kubernetes version: v1.27.4
	I0821 03:34:12.412390    1442 kubeadm.go:322] [preflight] Running pre-flight checks
	I0821 03:34:12.465080    1442 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0821 03:34:12.465135    1442 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0821 03:34:12.465183    1442 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0821 03:34:12.530098    1442 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0821 03:34:12.539343    1442 out.go:204]   - Generating certificates and keys ...
	I0821 03:34:12.539375    1442 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0821 03:34:12.539413    1442 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0821 03:34:12.639909    1442 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0821 03:34:12.680054    1442 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0821 03:34:12.714095    1442 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0821 03:34:12.849965    1442 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0821 03:34:12.996137    1442 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0821 03:34:12.996199    1442 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-500000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0821 03:34:13.141022    1442 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0821 03:34:13.141102    1442 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-500000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0821 03:34:13.228117    1442 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0821 03:34:13.409230    1442 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0821 03:34:13.774136    1442 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0821 03:34:13.774180    1442 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0821 03:34:13.866700    1442 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0821 03:34:13.977782    1442 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0821 03:34:14.068222    1442 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0821 03:34:14.144551    1442 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0821 03:34:14.151809    1442 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0821 03:34:14.152307    1442 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0821 03:34:14.152438    1442 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0821 03:34:14.228545    1442 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0821 03:34:14.232527    1442 out.go:204]   - Booting up control plane ...
	I0821 03:34:14.232575    1442 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0821 03:34:14.232614    1442 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0821 03:34:14.232645    1442 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0821 03:34:14.236440    1442 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0821 03:34:14.238376    1442 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0821 03:34:18.241227    1442 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.002539 seconds
	I0821 03:34:18.241427    1442 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0821 03:34:18.252886    1442 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0821 03:34:18.774491    1442 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0821 03:34:18.774728    1442 kubeadm.go:322] [mark-control-plane] Marking the node addons-500000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0821 03:34:19.280325    1442 kubeadm.go:322] [bootstrap-token] Using token: jvxtql.8wgzhr7nb5g9o93n
	I0821 03:34:19.286479    1442 out.go:204]   - Configuring RBAC rules ...
	I0821 03:34:19.286537    1442 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0821 03:34:19.290363    1442 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0821 03:34:19.293121    1442 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0821 03:34:19.294256    1442 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0821 03:34:19.295736    1442 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0821 03:34:19.296773    1442 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0821 03:34:19.301173    1442 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0821 03:34:19.474355    1442 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0821 03:34:19.693544    1442 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0821 03:34:19.694011    1442 kubeadm.go:322] 
	I0821 03:34:19.694043    1442 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0821 03:34:19.694047    1442 kubeadm.go:322] 
	I0821 03:34:19.694084    1442 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0821 03:34:19.694086    1442 kubeadm.go:322] 
	I0821 03:34:19.694099    1442 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0821 03:34:19.694192    1442 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0821 03:34:19.694216    1442 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0821 03:34:19.694219    1442 kubeadm.go:322] 
	I0821 03:34:19.694251    1442 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0821 03:34:19.694263    1442 kubeadm.go:322] 
	I0821 03:34:19.694293    1442 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0821 03:34:19.694296    1442 kubeadm.go:322] 
	I0821 03:34:19.694320    1442 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0821 03:34:19.694360    1442 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0821 03:34:19.694390    1442 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0821 03:34:19.694394    1442 kubeadm.go:322] 
	I0821 03:34:19.694446    1442 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0821 03:34:19.694488    1442 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0821 03:34:19.694495    1442 kubeadm.go:322] 
	I0821 03:34:19.694535    1442 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token jvxtql.8wgzhr7nb5g9o93n \
	I0821 03:34:19.694617    1442 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:c361d9930575cb4141f86c9c696a425212668e350af0245a5e7de41b1bd48407 \
	I0821 03:34:19.694632    1442 kubeadm.go:322] 	--control-plane 
	I0821 03:34:19.694634    1442 kubeadm.go:322] 
	I0821 03:34:19.694684    1442 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0821 03:34:19.694688    1442 kubeadm.go:322] 
	I0821 03:34:19.694735    1442 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token jvxtql.8wgzhr7nb5g9o93n \
	I0821 03:34:19.694782    1442 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:c361d9930575cb4141f86c9c696a425212668e350af0245a5e7de41b1bd48407 
	I0821 03:34:19.694835    1442 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
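The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's DER-encoded public key. A sketch of recomputing it by hand, assuming an RSA CA key and using the cert path minikube populated earlier in this log:

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'   # should print c361d993...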
	I0821 03:34:19.694840    1442 cni.go:84] Creating CNI manager for ""
	I0821 03:34:19.694847    1442 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0821 03:34:19.703814    1442 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0821 03:34:19.707890    1442 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0821 03:34:19.711023    1442 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
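The 457-byte file scp'd above is minikube's bridge CNI conflist. A representative sketch of such a file (the field values here are assumptions; the exact bytes minikube writes vary by version):

	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "addIf": "true",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}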
	I0821 03:34:19.716873    1442 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0821 03:34:19.716924    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:19.716951    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43 minikube.k8s.io/name=addons-500000 minikube.k8s.io/updated_at=2023_08_21T03_34_19_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:19.723924    1442 ops.go:34] apiserver oom_adj: -16
	I0821 03:34:19.767999    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:19.814902    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:20.352169    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:20.852188    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:21.352164    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:21.852123    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:22.352346    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:22.852184    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:23.352159    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:23.852279    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:24.352116    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:24.852182    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:25.352203    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:25.852083    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:26.352293    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:26.852062    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:27.352046    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:27.851991    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:28.352173    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:28.851976    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:29.352173    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:29.851943    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:30.352016    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:30.851904    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:31.351923    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:31.851905    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:32.351835    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:32.388500    1442 kubeadm.go:1081] duration metric: took 12.671972458s to wait for elevateKubeSystemPrivileges.
	I0821 03:34:32.388516    1442 kubeadm.go:406] StartCluster complete in 20.01278175s
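The repeated `kubectl get sa default` lines above (03:34:19 through 03:34:32) are a poll loop: minikube waits for the default service account to appear before treating the kube-system privilege elevation as complete. A rough shell equivalent of that loop (a sketch, not minikube's actual implementation):

	until sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5   # minikube's real retry interval differs
	done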
	I0821 03:34:32.388525    1442 settings.go:142] acquiring lock: {Name:mkeb461ec3a6a92ee32ce41e8df63d6759cb2728 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:34:32.388680    1442 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17102-920/kubeconfig
	I0821 03:34:32.388902    1442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/kubeconfig: {Name:mk2bc9c64ad130c36a0253707ac2ba3f8fd22371 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:34:32.389107    1442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0821 03:34:32.389147    1442 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0821 03:34:32.389221    1442 addons.go:69] Setting volumesnapshots=true in profile "addons-500000"
	I0821 03:34:32.389227    1442 addons.go:231] Setting addon volumesnapshots=true in "addons-500000"
	I0821 03:34:32.389225    1442 addons.go:69] Setting cloud-spanner=true in profile "addons-500000"
	I0821 03:34:32.389236    1442 addons.go:231] Setting addon cloud-spanner=true in "addons-500000"
	I0821 03:34:32.389251    1442 config.go:182] Loaded profile config "addons-500000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0821 03:34:32.389271    1442 host.go:66] Checking if "addons-500000" exists ...
	I0821 03:34:32.389279    1442 addons.go:69] Setting storage-provisioner=true in profile "addons-500000"
	I0821 03:34:32.389222    1442 addons.go:69] Setting gcp-auth=true in profile "addons-500000"
	I0821 03:34:32.389282    1442 addons.go:231] Setting addon storage-provisioner=true in "addons-500000"
	I0821 03:34:32.389288    1442 mustload.go:65] Loading cluster: addons-500000
	I0821 03:34:32.389299    1442 host.go:66] Checking if "addons-500000" exists ...
	I0821 03:34:32.389299    1442 addons.go:69] Setting inspektor-gadget=true in profile "addons-500000"
	I0821 03:34:32.389327    1442 addons.go:69] Setting registry=true in profile "addons-500000"
	I0821 03:34:32.389360    1442 config.go:182] Loaded profile config "addons-500000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0821 03:34:32.389358    1442 addons.go:69] Setting ingress-dns=true in profile "addons-500000"
	I0821 03:34:32.389378    1442 addons.go:231] Setting addon ingress-dns=true in "addons-500000"
	I0821 03:34:32.389273    1442 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-500000"
	I0821 03:34:32.389396    1442 addons.go:69] Setting ingress=true in profile "addons-500000"
	I0821 03:34:32.389434    1442 host.go:66] Checking if "addons-500000" exists ...
	I0821 03:34:32.389418    1442 addons.go:69] Setting metrics-server=true in profile "addons-500000"
	I0821 03:34:32.389454    1442 addons.go:231] Setting addon metrics-server=true in "addons-500000"
	I0821 03:34:32.389465    1442 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-500000"
	I0821 03:34:32.389506    1442 host.go:66] Checking if "addons-500000" exists ...
	I0821 03:34:32.389519    1442 host.go:66] Checking if "addons-500000" exists ...
	I0821 03:34:32.389271    1442 host.go:66] Checking if "addons-500000" exists ...
	W0821 03:34:32.389564    1442 host.go:54] host status for "addons-500000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/monitor: connect: connection refused
	W0821 03:34:32.389572    1442 addons.go:277] "addons-500000" is not running, setting storage-provisioner=true and skipping enablement (err=<nil>)
	I0821 03:34:32.389347    1442 addons.go:231] Setting addon inspektor-gadget=true in "addons-500000"
	I0821 03:34:32.389693    1442 host.go:66] Checking if "addons-500000" exists ...
	W0821 03:34:32.389757    1442 host.go:54] host status for "addons-500000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/monitor: connect: connection refused
	W0821 03:34:32.389767    1442 addons.go:277] "addons-500000" is not running, setting cloud-spanner=true and skipping enablement (err=<nil>)
	I0821 03:34:32.389367    1442 addons.go:231] Setting addon registry=true in "addons-500000"
	I0821 03:34:32.389786    1442 host.go:66] Checking if "addons-500000" exists ...
	W0821 03:34:32.389790    1442 host.go:54] host status for "addons-500000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/monitor: connect: connection refused
	W0821 03:34:32.389796    1442 addons.go:277] "addons-500000" is not running, setting metrics-server=true and skipping enablement (err=<nil>)
	I0821 03:34:32.389799    1442 addons.go:467] Verifying addon metrics-server=true in "addons-500000"
	W0821 03:34:32.389788    1442 host.go:54] host status for "addons-500000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/monitor: connect: connection refused
	W0821 03:34:32.389803    1442 addons.go:277] "addons-500000" is not running, setting csi-hostpath-driver=true and skipping enablement (err=<nil>)
	I0821 03:34:32.389805    1442 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-500000"
	I0821 03:34:32.389275    1442 addons.go:69] Setting default-storageclass=true in profile "addons-500000"
	I0821 03:34:32.394058    1442 out.go:177] * Verifying csi-hostpath-driver addon...
	I0821 03:34:32.389436    1442 addons.go:231] Setting addon ingress=true in "addons-500000"
	I0821 03:34:32.389868    1442 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-500000"
	W0821 03:34:32.389953    1442 host.go:54] host status for "addons-500000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/monitor: connect: connection refused
	W0821 03:34:32.390033    1442 host.go:54] host status for "addons-500000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/monitor: connect: connection refused
	W0821 03:34:32.390053    1442 host.go:54] host status for "addons-500000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/monitor: connect: connection refused
	I0821 03:34:32.390510    1442 host.go:66] Checking if "addons-500000" exists ...
	I0821 03:34:32.409190    1442 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	W0821 03:34:32.404296    1442 addons.go:277] "addons-500000" is not running, setting ingress-dns=true and skipping enablement (err=<nil>)
	W0821 03:34:32.404342    1442 addons.go:277] "addons-500000" is not running, setting inspektor-gadget=true and skipping enablement (err=<nil>)
	W0821 03:34:32.404346    1442 addons.go:277] "addons-500000" is not running, setting registry=true and skipping enablement (err=<nil>)
	I0821 03:34:32.404410    1442 host.go:66] Checking if "addons-500000" exists ...
	I0821 03:34:32.404764    1442 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0821 03:34:32.413218    1442 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0821 03:34:32.413224    1442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0821 03:34:32.413232    1442 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/id_rsa Username:docker}
	I0821 03:34:32.413266    1442 addons.go:467] Verifying addon registry=true in "addons-500000"
	I0821 03:34:32.418274    1442 out.go:177] * Verifying registry addon...
	I0821 03:34:32.419795    1442 addons.go:231] Setting addon default-storageclass=true in "addons-500000"
	I0821 03:34:32.419868    1442 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-500000" context rescaled to 1 replicas
	I0821 03:34:32.420817    1442 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0821 03:34:32.421498    1442 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0821 03:34:32.421694    1442 host.go:66] Checking if "addons-500000" exists ...
	I0821 03:34:32.421701    1442 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0821 03:34:32.421849    1442 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0821 03:34:32.431173    1442 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0821 03:34:32.440212    1442 out.go:177] * Verifying Kubernetes components...
	I0821 03:34:32.431974    1442 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0821 03:34:32.435186    1442 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=registry
	I0821 03:34:32.444202    1442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0821 03:34:32.444209    1442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0821 03:34:32.447466    1442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0821 03:34:32.448196    1442 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.8.1
	I0821 03:34:32.448211    1442 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/id_rsa Username:docker}
	I0821 03:34:32.451292    1442 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0821 03:34:32.451299    1442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16083 bytes)
	I0821 03:34:32.451306    1442 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/id_rsa Username:docker}
	I0821 03:34:32.454351    1442 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0821 03:34:32.454358    1442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0821 03:34:32.485876    1442 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0821 03:34:32.485886    1442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0821 03:34:32.513135    1442 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0821 03:34:32.513147    1442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0821 03:34:32.532036    1442 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0821 03:34:32.532052    1442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0821 03:34:32.537566    1442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0821 03:34:32.542495    1442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0821 03:34:32.548533    1442 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0821 03:34:32.548541    1442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0821 03:34:32.568087    1442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0821 03:34:33.517324    1442 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.069159875s)
	I0821 03:34:33.517338    1442 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.069147125s)
	I0821 03:34:33.517342    1442 start.go:901] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
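The sed pipeline that just completed rewrites the CoreDNS ConfigMap so pods can resolve the host machine by name. The fragment it inserts ahead of the `forward . /etc/resolv.conf` line (taken directly from the sed expression in the command above) is:

	hosts {
	   192.168.105.1 host.minikube.internal
	   fallthrough
	}

along with a `log` directive ahead of the `errors` line.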
	I0821 03:34:33.517808    1442 node_ready.go:35] waiting up to 6m0s for node "addons-500000" to be "Ready" ...
	I0821 03:34:33.519592    1442 node_ready.go:49] node "addons-500000" has status "Ready":"True"
	I0821 03:34:33.519599    1442 node_ready.go:38] duration metric: took 1.779708ms waiting for node "addons-500000" to be "Ready" ...
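An equivalent hand check for the node-readiness wait above (hypothetical, using the user-facing kubectl rather than minikube's internal client):

	kubectl wait --for=condition=Ready node/addons-500000 --timeout=6m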
	I0821 03:34:33.519602    1442 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0821 03:34:33.522687    1442 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-97rp7" in "kube-system" namespace to be "Ready" ...
	I0821 03:34:33.964195    1442 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.421717084s)
	I0821 03:34:33.964211    1442 addons.go:467] Verifying addon ingress=true in "addons-500000"
	I0821 03:34:33.968723    1442 out.go:177] * Verifying ingress addon...
	I0821 03:34:33.964338    1442 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.396275834s)
	W0821 03:34:33.968774    1442 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0821 03:34:33.975741    1442 retry.go:31] will retry after 231.591556ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
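The failure above is an ordering problem: the snapshot CRDs and a VolumeSnapshotClass custom resource are applied in one kubectl invocation, and the CR cannot be mapped before its CRD is established. minikube retries with --force below; a sketch of the CRD-first alternative (same manifests, split into two applies with a wait in between):

	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	kubectl wait --for condition=established --timeout=60s crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml \
	  -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml \
	  -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml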
	I0821 03:34:33.976141    1442 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0821 03:34:33.984299    1442 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0821 03:34:33.984307    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:33.987720    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:34.207434    1442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0821 03:34:34.491123    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:34.991180    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:35.490538    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:35.534205    1442 pod_ready.go:102] pod "coredns-5d78c9869d-97rp7" in "kube-system" namespace has status "Ready":"False"
	I0821 03:34:35.990628    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:36.490998    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:36.745839    1442 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.5384555s)
	I0821 03:34:36.990793    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:37.491119    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:37.534210    1442 pod_ready.go:102] pod "coredns-5d78c9869d-97rp7" in "kube-system" namespace has status "Ready":"False"
	I0821 03:34:37.990643    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:38.490772    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:38.997287    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:39.008172    1442 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0821 03:34:39.008186    1442 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/id_rsa Username:docker}
	I0821 03:34:39.055480    1442 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0821 03:34:39.064828    1442 addons.go:231] Setting addon gcp-auth=true in "addons-500000"
	I0821 03:34:39.064858    1442 host.go:66] Checking if "addons-500000" exists ...
	I0821 03:34:39.065649    1442 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0821 03:34:39.065660    1442 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/id_rsa Username:docker}
	I0821 03:34:39.100776    1442 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0821 03:34:39.103705    1442 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0821 03:34:39.107726    1442 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0821 03:34:39.107734    1442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0821 03:34:39.113078    1442 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0821 03:34:39.113087    1442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0821 03:34:39.127541    1442 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0821 03:34:39.127551    1442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0821 03:34:39.133486    1442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0821 03:34:39.491109    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:39.534694    1442 pod_ready.go:102] pod "coredns-5d78c9869d-97rp7" in "kube-system" namespace has status "Ready":"False"
	I0821 03:34:39.629710    1442 addons.go:467] Verifying addon gcp-auth=true in "addons-500000"
	I0821 03:34:39.641410    1442 out.go:177] * Verifying gcp-auth addon...
	I0821 03:34:39.650441    1442 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0821 03:34:39.656554    1442 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0821 03:34:39.656563    1442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 03:34:39.658191    1442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 03:34:39.991177    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:40.161154    1442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 03:34:40.492443    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:40.660810    1442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 03:34:40.990558    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:41.161357    1442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 03:34:41.492269    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:41.534695    1442 pod_ready.go:102] pod "coredns-5d78c9869d-97rp7" in "kube-system" namespace has status "Ready":"False"
	I0821 03:34:41.660947    1442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 03:34:41.990678    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:42.161013    1442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 03:34:42.490658    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:42.660884    1442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 03:34:42.990530    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:43.161042    1442 kapi.go:107] duration metric: took 3.510698166s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0821 03:34:43.165184    1442 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-500000 cluster.
	I0821 03:34:43.169238    1442 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0821 03:34:43.173158    1442 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
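Opting a pod out of the credential mount, per the message above, means adding the gcp-auth-skip-secret label key to the pod metadata. A minimal sketch (the "true" value is an assumption; the key is what the webhook checks for):

	metadata:
	  labels:
	    gcp-auth-skip-secret: "true"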
	I0821 03:34:43.491145    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:43.534713    1442 pod_ready.go:97] pod "coredns-5d78c9869d-97rp7" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-21 03:34:32 -0700 PDT Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-21 03:34:32 -0700 PDT Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-21 03:34:32 -0700 PDT Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-21 03:34:32 -0700 PDT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.105.2 HostIPs:[] PodIP: PodIPs:[] StartTime:2023-08-21 03:34:32 -0700 PDT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-08-21 03:34:33 -0700 PDT,FinishedAt:2023-08-21 03:34:43 -0700 PDT,ContainerID:docker://d9032391cb53f0fa8cfd4e1696eef2d7eb7096ba08423fd5087bb7b4d2fba5ed,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:docker://d9032391cb53f0fa8cfd4e1696eef2d7eb7096ba08423fd5087bb7b4d2fba5ed Started:0x140018d39a0 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0821 03:34:43.534727    1442 pod_ready.go:81] duration metric: took 10.012309458s waiting for pod "coredns-5d78c9869d-97rp7" in "kube-system" namespace to be "Ready" ...
	E0821 03:34:43.534732    1442 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-5d78c9869d-97rp7" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-21 03:34:32 -0700 PDT Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-21 03:34:32 -0700 PDT Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-21 03:34:32 -0700 PDT Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-21 03:34:32 -0700 PDT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.105.2 HostIPs:[] PodIP: PodIPs:[] StartTime:2023-08-21 03:34:32 -0700 PDT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-08-21 03:34:33 -0700 PDT,FinishedAt:2023-08-21 03:34:43 -0700 PDT,ContainerID:docker://d9032391cb53f0fa8cfd4e1696eef2d7eb7096ba08423fd5087bb7b4d2fba5ed,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:docker://d9032391cb53f0fa8cfd4e1696eef2d7eb7096ba08423fd5087bb7b4d2fba5ed Started:0x140018d39a0 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0821 03:34:43.534736    1442 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-hbg44" in "kube-system" namespace to be "Ready" ...
	I0821 03:34:43.537136    1442 pod_ready.go:92] pod "coredns-5d78c9869d-hbg44" in "kube-system" namespace has status "Ready":"True"
	I0821 03:34:43.537140    1442 pod_ready.go:81] duration metric: took 2.400375ms waiting for pod "coredns-5d78c9869d-hbg44" in "kube-system" namespace to be "Ready" ...
	I0821 03:34:43.537145    1442 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-500000" in "kube-system" namespace to be "Ready" ...
	I0821 03:34:43.539758    1442 pod_ready.go:92] pod "etcd-addons-500000" in "kube-system" namespace has status "Ready":"True"
	I0821 03:34:43.539762    1442 pod_ready.go:81] duration metric: took 2.614916ms waiting for pod "etcd-addons-500000" in "kube-system" namespace to be "Ready" ...
	I0821 03:34:43.539766    1442 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-500000" in "kube-system" namespace to be "Ready" ...
	I0821 03:34:43.542039    1442 pod_ready.go:92] pod "kube-apiserver-addons-500000" in "kube-system" namespace has status "Ready":"True"
	I0821 03:34:43.542045    1442 pod_ready.go:81] duration metric: took 2.276584ms waiting for pod "kube-apiserver-addons-500000" in "kube-system" namespace to be "Ready" ...
	I0821 03:34:43.542049    1442 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-500000" in "kube-system" namespace to be "Ready" ...
	I0821 03:34:43.544341    1442 pod_ready.go:92] pod "kube-controller-manager-addons-500000" in "kube-system" namespace has status "Ready":"True"
	I0821 03:34:43.544345    1442 pod_ready.go:81] duration metric: took 2.2935ms waiting for pod "kube-controller-manager-addons-500000" in "kube-system" namespace to be "Ready" ...
	I0821 03:34:43.544348    1442 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-z2wj9" in "kube-system" namespace to be "Ready" ...
	I0821 03:34:43.933736    1442 pod_ready.go:92] pod "kube-proxy-z2wj9" in "kube-system" namespace has status "Ready":"True"
	I0821 03:34:43.933748    1442 pod_ready.go:81] duration metric: took 389.407375ms waiting for pod "kube-proxy-z2wj9" in "kube-system" namespace to be "Ready" ...
	I0821 03:34:43.933752    1442 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-500000" in "kube-system" namespace to be "Ready" ...
	I0821 03:34:43.990470    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:44.334535    1442 pod_ready.go:92] pod "kube-scheduler-addons-500000" in "kube-system" namespace has status "Ready":"True"
	I0821 03:34:44.334545    1442 pod_ready.go:81] duration metric: took 400.801125ms waiting for pod "kube-scheduler-addons-500000" in "kube-system" namespace to be "Ready" ...
	I0821 03:34:44.334549    1442 pod_ready.go:38] duration metric: took 10.81524225s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0821 03:34:44.334558    1442 api_server.go:52] waiting for apiserver process to appear ...
	I0821 03:34:44.334639    1442 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0821 03:34:44.339980    1442 api_server.go:72] duration metric: took 11.909098333s to wait for apiserver process to appear ...
	I0821 03:34:44.339987    1442 api_server.go:88] waiting for apiserver healthz status ...
	I0821 03:34:44.339993    1442 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I0821 03:34:44.344178    1442 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
	I0821 03:34:44.344920    1442 api_server.go:141] control plane version: v1.27.4
	I0821 03:34:44.344925    1442 api_server.go:131] duration metric: took 4.936ms to wait for apiserver health ...
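The healthz probe above can be reproduced by hand: /healthz is readable by unauthenticated clients through the default system:public-info-viewer binding, so a bare curl (with -k, since the host shell does not trust the minikube CA by default) is enough:

	curl -k https://192.168.105.2:8443/healthz
	ok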
	I0821 03:34:44.344929    1442 system_pods.go:43] waiting for kube-system pods to appear ...
	I0821 03:34:44.490452    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:44.535983    1442 system_pods.go:59] 8 kube-system pods found
	I0821 03:34:44.535991    1442 system_pods.go:61] "coredns-5d78c9869d-hbg44" [2212048e-385c-4235-ad14-1b9e4e812106] Running
	I0821 03:34:44.535994    1442 system_pods.go:61] "etcd-addons-500000" [dcde2eed-b2a3-4b2d-af51-14d42189714c] Running
	I0821 03:34:44.536011    1442 system_pods.go:61] "kube-apiserver-addons-500000" [a4c38aeb-a7ef-4239-ac34-2437f9c67d96] Running
	I0821 03:34:44.536015    1442 system_pods.go:61] "kube-controller-manager-addons-500000" [972b1e42-cd56-4f77-ad52-a1df2b79fdae] Running
	I0821 03:34:44.536018    1442 system_pods.go:61] "kube-proxy-z2wj9" [56cdd0e9-2b8f-476e-be08-a52381eecb16] Running
	I0821 03:34:44.536020    1442 system_pods.go:61] "kube-scheduler-addons-500000" [c2d2f1e5-45c6-48a9-990d-7e32d9d75976] Running
	I0821 03:34:44.536022    1442 system_pods.go:61] "snapshot-controller-75bbb956b9-4pgqh" [7452ce04-2fbb-4f7a-9e5f-87b8b577fc94] Running
	I0821 03:34:44.536025    1442 system_pods.go:61] "snapshot-controller-75bbb956b9-j9mkf" [dbd2a297-29a5-4435-8fb1-849d8ae91771] Running
	I0821 03:34:44.536028    1442 system_pods.go:74] duration metric: took 191.1015ms to wait for pod list to return data ...
	I0821 03:34:44.536033    1442 default_sa.go:34] waiting for default service account to be created ...
	I0821 03:34:44.734042    1442 default_sa.go:45] found service account: "default"
	I0821 03:34:44.734051    1442 default_sa.go:55] duration metric: took 198.020583ms for default service account to be created ...
	I0821 03:34:44.734055    1442 system_pods.go:116] waiting for k8s-apps to be running ...
	I0821 03:34:44.935348    1442 system_pods.go:86] 8 kube-system pods found
	I0821 03:34:44.935359    1442 system_pods.go:89] "coredns-5d78c9869d-hbg44" [2212048e-385c-4235-ad14-1b9e4e812106] Running
	I0821 03:34:44.935362    1442 system_pods.go:89] "etcd-addons-500000" [dcde2eed-b2a3-4b2d-af51-14d42189714c] Running
	I0821 03:34:44.935365    1442 system_pods.go:89] "kube-apiserver-addons-500000" [a4c38aeb-a7ef-4239-ac34-2437f9c67d96] Running
	I0821 03:34:44.935367    1442 system_pods.go:89] "kube-controller-manager-addons-500000" [972b1e42-cd56-4f77-ad52-a1df2b79fdae] Running
	I0821 03:34:44.935369    1442 system_pods.go:89] "kube-proxy-z2wj9" [56cdd0e9-2b8f-476e-be08-a52381eecb16] Running
	I0821 03:34:44.935372    1442 system_pods.go:89] "kube-scheduler-addons-500000" [c2d2f1e5-45c6-48a9-990d-7e32d9d75976] Running
	I0821 03:34:44.935374    1442 system_pods.go:89] "snapshot-controller-75bbb956b9-4pgqh" [7452ce04-2fbb-4f7a-9e5f-87b8b577fc94] Running
	I0821 03:34:44.935376    1442 system_pods.go:89] "snapshot-controller-75bbb956b9-j9mkf" [dbd2a297-29a5-4435-8fb1-849d8ae91771] Running
	I0821 03:34:44.935380    1442 system_pods.go:126] duration metric: took 201.327917ms to wait for k8s-apps to be running ...
	I0821 03:34:44.935391    1442 system_svc.go:44] waiting for kubelet service to be running ....
	I0821 03:34:44.935475    1442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0821 03:34:44.941643    1442 system_svc.go:56] duration metric: took 6.252209ms WaitForService to wait for kubelet.
	I0821 03:34:44.941651    1442 kubeadm.go:581] duration metric: took 12.5107865s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0821 03:34:44.941660    1442 node_conditions.go:102] verifying NodePressure condition ...
	I0821 03:34:44.990746    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:45.134674    1442 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0821 03:34:45.134706    1442 node_conditions.go:123] node cpu capacity is 2
	I0821 03:34:45.134712    1442 node_conditions.go:105] duration metric: took 193.055083ms to run NodePressure ...
	I0821 03:34:45.134717    1442 start.go:228] waiting for startup goroutines ...
	I0821 03:34:45.490470    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:45.990643    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:46.490327    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:46.990587    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:47.490536    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:47.990358    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:48.490279    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:48.990490    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:49.490328    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:49.990414    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:50.490337    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:50.990260    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:51.490639    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:51.989843    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:52.490813    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:52.990112    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:53.491005    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:53.992627    1442 kapi.go:107] duration metric: took 20.017033875s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0821 03:40:32.405313    1442 kapi.go:107] duration metric: took 6m0.010490834s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	W0821 03:40:32.405643    1442 out.go:239] ! Enabling 'csi-hostpath-driver' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=csi-hostpath-driver pods: context deadline exceeded]
	I0821 03:40:32.421828    1442 kapi.go:107] duration metric: took 6m0.009978583s to wait for kubernetes.io/minikube-addons=registry ...
	W0821 03:40:32.421921    1442 out.go:239] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
	I0821 03:40:32.430174    1442 out.go:177] * Enabled addons: storage-provisioner, cloud-spanner, metrics-server, ingress-dns, inspektor-gadget, default-storageclass, volumesnapshots, gcp-auth, ingress
	I0821 03:40:32.437176    1442 addons.go:502] enable addons completed in 6m0.058033333s: enabled=[storage-provisioner cloud-spanner metrics-server ingress-dns inspektor-gadget default-storageclass volumesnapshots gcp-auth ingress]
	I0821 03:40:32.437214    1442 start.go:233] waiting for cluster config update ...
	I0821 03:40:32.437252    1442 start.go:242] writing updated cluster config ...
	I0821 03:40:32.438394    1442 ssh_runner.go:195] Run: rm -f paused
	I0821 03:40:32.505190    1442 start.go:600] kubectl: 1.27.2, cluster: 1.27.4 (minor skew: 0)
	I0821 03:40:32.509248    1442 out.go:177] * Done! kubectl is now configured to use "addons-500000" cluster and "default" namespace by default
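(The kapi.go entries above are a poll-until-deadline loop over pods matching a label selector: the ingress-nginx selector became Running within ~20s, while the csi-hostpath-driver and registry selectors never did and hit their 6m0s context deadline, producing the two "context deadline exceeded" warnings. A minimal client-go sketch of that wait pattern, as an assumed shape for illustration rather than minikube's actual kapi.go:)

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForPodsRunning polls until every pod matching selector is Running,
	// or the context deadline expires -- the "context deadline exceeded" case.
	func waitForPodsRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
		tick := time.NewTicker(500 * time.Millisecond)
		defer tick.Stop()
		for {
			select {
			case <-ctx.Done():
				return fmt.Errorf("waiting for %s pods: %w", selector, ctx.Err())
			case <-tick.C:
				pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil {
					continue // transient API errors are retried until the deadline
				}
				ready := len(pods.Items) > 0
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						ready = false
					}
				}
				if ready {
					return nil
				}
			}
		}
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		if err := waitForPodsRunning(ctx, cs, "kube-system", "kubernetes.io/minikube-addons=registry"); err != nil {
			fmt.Println(err)
		}
	}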
	
	* 
	* ==> Docker <==
	* -- Journal begins at Mon 2023-08-21 10:34:00 UTC, ends at Mon 2023-08-21 10:52:33 UTC. --
	Aug 21 10:34:41 addons-500000 dockerd[1153]: time="2023-08-21T10:34:41.956624254Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 21 10:34:42 addons-500000 cri-dockerd[1049]: time="2023-08-21T10:34:42Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bbb4a4c960656b62bb19b9b067c655ea39e12d8756d8701729b8421b997616a1/resolv.conf as [nameserver 10.96.0.10 search ingress-nginx.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Aug 21 10:34:42 addons-500000 cri-dockerd[1049]: time="2023-08-21T10:34:42Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf: Status: Downloaded newer image for gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf"
	Aug 21 10:34:42 addons-500000 dockerd[1148]: time="2023-08-21T10:34:42.514519077Z" level=warning msg="reference for unknown type: " digest="sha256:e5c4824e7375fcf2a393e1c03c293b69759af37a9ca6abdb91b13d78a93da8bd" remote="registry.k8s.io/ingress-nginx/controller@sha256:e5c4824e7375fcf2a393e1c03c293b69759af37a9ca6abdb91b13d78a93da8bd"
	Aug 21 10:34:42 addons-500000 dockerd[1153]: time="2023-08-21T10:34:42.565577154Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 21 10:34:42 addons-500000 dockerd[1153]: time="2023-08-21T10:34:42.565634689Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 21 10:34:42 addons-500000 dockerd[1153]: time="2023-08-21T10:34:42.565652592Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 21 10:34:42 addons-500000 dockerd[1153]: time="2023-08-21T10:34:42.565663687Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 21 10:34:43 addons-500000 dockerd[1153]: time="2023-08-21T10:34:43.460515395Z" level=info msg="shim disconnected" id=d9032391cb53f0fa8cfd4e1696eef2d7eb7096ba08423fd5087bb7b4d2fba5ed namespace=moby
	Aug 21 10:34:43 addons-500000 dockerd[1153]: time="2023-08-21T10:34:43.460544530Z" level=warning msg="cleaning up after shim disconnected" id=d9032391cb53f0fa8cfd4e1696eef2d7eb7096ba08423fd5087bb7b4d2fba5ed namespace=moby
	Aug 21 10:34:43 addons-500000 dockerd[1153]: time="2023-08-21T10:34:43.460548812Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 21 10:34:43 addons-500000 dockerd[1148]: time="2023-08-21T10:34:43.460463883Z" level=info msg="ignoring event" container=d9032391cb53f0fa8cfd4e1696eef2d7eb7096ba08423fd5087bb7b4d2fba5ed module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 21 10:34:43 addons-500000 dockerd[1153]: time="2023-08-21T10:34:43.550734250Z" level=info msg="shim disconnected" id=3c57b48b5f08f4ead2c53d0b29e10a8a3dc35318069e85faa762b9ff0597901d namespace=moby
	Aug 21 10:34:43 addons-500000 dockerd[1148]: time="2023-08-21T10:34:43.550868047Z" level=info msg="ignoring event" container=3c57b48b5f08f4ead2c53d0b29e10a8a3dc35318069e85faa762b9ff0597901d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 21 10:34:43 addons-500000 dockerd[1153]: time="2023-08-21T10:34:43.550901548Z" level=warning msg="cleaning up after shim disconnected" id=3c57b48b5f08f4ead2c53d0b29e10a8a3dc35318069e85faa762b9ff0597901d namespace=moby
	Aug 21 10:34:43 addons-500000 dockerd[1153]: time="2023-08-21T10:34:43.550916158Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 21 10:34:52 addons-500000 cri-dockerd[1049]: time="2023-08-21T10:34:52Z" level=info msg="Pulling image registry.k8s.io/ingress-nginx/controller:v1.8.1@sha256:e5c4824e7375fcf2a393e1c03c293b69759af37a9ca6abdb91b13d78a93da8bd: df2bdb71e370: Extracting [=====================================>             ]  8.782MB/11.56MB"
	Aug 21 10:34:52 addons-500000 dockerd[1148]: time="2023-08-21T10:34:52.972147755Z" level=warning msg="ignored xattrs in archive: underlying filesystem doesn't support them" errors="[operation not supported]"
	Aug 21 10:34:52 addons-500000 dockerd[1148]: time="2023-08-21T10:34:52.973540499Z" level=warning msg="ignored xattrs in archive: underlying filesystem doesn't support them" errors="[operation not supported]"
	Aug 21 10:34:53 addons-500000 dockerd[1148]: time="2023-08-21T10:34:53.079609792Z" level=warning msg="ignored xattrs in archive: underlying filesystem doesn't support them" errors="[operation not supported]"
	Aug 21 10:34:53 addons-500000 cri-dockerd[1049]: time="2023-08-21T10:34:53Z" level=info msg="Stop pulling image registry.k8s.io/ingress-nginx/controller:v1.8.1@sha256:e5c4824e7375fcf2a393e1c03c293b69759af37a9ca6abdb91b13d78a93da8bd: Status: Downloaded newer image for registry.k8s.io/ingress-nginx/controller@sha256:e5c4824e7375fcf2a393e1c03c293b69759af37a9ca6abdb91b13d78a93da8bd"
	Aug 21 10:34:53 addons-500000 dockerd[1153]: time="2023-08-21T10:34:53.201046831Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 21 10:34:53 addons-500000 dockerd[1153]: time="2023-08-21T10:34:53.201094050Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 21 10:34:53 addons-500000 dockerd[1153]: time="2023-08-21T10:34:53.201110708Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 21 10:34:53 addons-500000 dockerd[1153]: time="2023-08-21T10:34:53.201117263Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                         ATTEMPT             POD ID
	734d7d69c9e8b       registry.k8s.io/ingress-nginx/controller@sha256:e5c4824e7375fcf2a393e1c03c293b69759af37a9ca6abdb91b13d78a93da8bd             17 minutes ago      Running             controller                   0                   bbb4a4c960656
	dbe5746b118a6       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf                 17 minutes ago      Running             gcp-auth                     0                   31154fc41fc35
	fc5767357c5d9       8f2588812ab29                                                                                                                17 minutes ago      Exited              patch                        1                   0538e79b5c883
	aa7d89a7d68d0       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:543c40fd093964bc9ab509d3e791f9989963021f1e9e4c9c7b6700b02bfb227b   17 minutes ago      Exited              create                       0                   3c078f4b9885e
	7979593c9bb52       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280      17 minutes ago      Running             volume-snapshot-controller   0                   70a68685a69fb
	fe9609fabef21       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280      17 minutes ago      Running             volume-snapshot-controller   0                   39eda7944d576
	16cfb4c805080       97e04611ad434                                                                                                                18 minutes ago      Running             coredns                      0                   b6fa8f87ea743
	36558206e7ebf       532e5a30e948f                                                                                                                18 minutes ago      Running             kube-proxy                   0                   ccc8633d52ca6
	bd48baf71b163       6eb63895cb67f                                                                                                                18 minutes ago      Running             kube-scheduler               0                   65c9ea48d27ae
	27dc2c0d7a4a5       24bc64e911039                                                                                                                18 minutes ago      Running             etcd                         0                   0f2cdc52bbda6
	dc949a6ce14c1       64aece92d6bde                                                                                                                18 minutes ago      Running             kube-apiserver               0                   090daa0e10080
	41982c5e9fc8f       389f6f052cf83                                                                                                                18 minutes ago      Running             kube-controller-manager      0                   a9c3d15b86bf8
	
	* 
	* ==> controller_ingress [734d7d69c9e8] <==
	*   Build:         dc88dce9ea5e700f3301d16f971fa17c6cfe757d
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.21.6
	
	-------------------------------------------------------------------------------
	
	W0821 10:34:53.255429       6 client_config.go:618] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
	I0821 10:34:53.255517       6 main.go:209] "Creating API client" host="https://10.96.0.1:443"
	I0821 10:34:53.259720       6 main.go:253] "Running in Kubernetes cluster" major="1" minor="27" git="v1.27.4" state="clean" commit="fa3d7990104d7c1f16943a67f11b154b71f6a132" platform="linux/arm64"
	I0821 10:34:53.370154       6 main.go:104] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I0821 10:34:53.376568       6 ssl.go:533] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0821 10:34:53.385083       6 nginx.go:261] "Starting NGINX Ingress controller"
	I0821 10:34:53.389190       6 event.go:285] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"5b999e5a-759f-47c2-858b-4e3d79b34cbe", APIVersion:"v1", ResourceVersion:"433", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0821 10:34:53.391567       6 event.go:285] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"a91d48bb-075d-496f-a947-fa3bf3c2ef7e", APIVersion:"v1", ResourceVersion:"434", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0821 10:34:53.391592       6 event.go:285] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"5124232c-77f2-4a7f-a11f-9600873ca980", APIVersion:"v1", ResourceVersion:"435", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0821 10:34:54.586254       6 nginx.go:304] "Starting NGINX process"
	I0821 10:34:54.586524       6 leaderelection.go:248] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0821 10:34:54.587191       6 nginx.go:324] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0821 10:34:54.588124       6 controller.go:190] "Configuration changes detected, backend reload required"
	I0821 10:34:54.605898       6 leaderelection.go:258] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0821 10:34:54.606668       6 status.go:84] "New leader elected" identity="ingress-nginx-controller-7799c6795f-4ppd9"
	I0821 10:34:54.622098       6 status.go:215] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-7799c6795f-4ppd9" node="addons-500000"
	I0821 10:34:54.663825       6 controller.go:207] "Backend successfully reloaded"
	I0821 10:34:54.663941       6 controller.go:218] "Initial sync, sleeping for 1 second"
	I0821 10:34:54.664013       6 event.go:285] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7799c6795f-4ppd9", UID:"c950764c-9601-4c76-adb3-ddb61bd6335d", APIVersion:"v1", ResourceVersion:"458", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
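(The client_config.go warning at the top of this section is client-go's standard fallback: with neither --kubeconfig nor --master set, the controller builds its REST config from the pod's in-cluster service account. A minimal sketch of that fallback, illustrative only:)

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		// Inside a pod, rest.InClusterConfig reads the service-account token
		// and CA bundle from /var/run/secrets/kubernetes.io/serviceaccount.
		cfg, err := rest.InClusterConfig()
		if err != nil {
			panic(err) // only works when running inside the cluster
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pods, err := cs.CoreV1().Pods("ingress-nginx").List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println("pods in ingress-nginx:", len(pods.Items))
	}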
	
	* 
	* ==> coredns [16cfb4c80508] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	[INFO] Reloading complete
	[INFO] 127.0.0.1:52450 - 49271 "HINFO IN 1467224369207536570.5830207891825585757. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.005303742s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-500000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-500000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43
	                    minikube.k8s.io/name=addons-500000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_21T03_34_19_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 21 Aug 2023 10:34:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-500000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 21 Aug 2023 10:52:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 21 Aug 2023 10:50:40 +0000   Mon, 21 Aug 2023 10:34:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 21 Aug 2023 10:50:40 +0000   Mon, 21 Aug 2023 10:34:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 21 Aug 2023 10:50:40 +0000   Mon, 21 Aug 2023 10:34:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 21 Aug 2023 10:50:40 +0000   Mon, 21 Aug 2023 10:34:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-500000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 0e4a1f71467c44c8a10eca186773afe2
	  System UUID:                0e4a1f71467c44c8a10eca186773afe2
	  Boot ID:                    6d5e7ffc-fb7d-41fe-b076-69fd8535d300
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.4
	  Kubelet Version:            v1.27.4
	  Kube-Proxy Version:         v1.27.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  gcp-auth                    gcp-auth-58478865f7-zcg47                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  ingress-nginx               ingress-nginx-controller-7799c6795f-4ppd9    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         18m
	  kube-system                 coredns-5d78c9869d-hbg44                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     18m
	  kube-system                 etcd-addons-500000                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         18m
	  kube-system                 kube-apiserver-addons-500000                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-addons-500000        200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-z2wj9                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-addons-500000                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 snapshot-controller-75bbb956b9-4pgqh         0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 snapshot-controller-75bbb956b9-j9mkf         0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 18m   kube-proxy       
	  Normal  Starting                 18m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  18m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  18m   kubelet          Node addons-500000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m   kubelet          Node addons-500000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m   kubelet          Node addons-500000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                18m   kubelet          Node addons-500000 status is now: NodeReady
	  Normal  RegisteredNode           18m   node-controller  Node addons-500000 event: Registered Node addons-500000 in Controller
	
	* 
	* ==> dmesg <==
	* [Aug21 10:33] ACPI: SRAT not present
	[  +0.000000] KASLR disabled due to lack of seed
	[  +0.638012] EINJ: EINJ table not found.
	[  +0.490829] systemd-fstab-generator[117]: Ignoring "noauto" for root device
	[  +0.044680] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000871] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[Aug21 10:34] systemd-fstab-generator[479]: Ignoring "noauto" for root device
	[  +0.063431] systemd-fstab-generator[490]: Ignoring "noauto" for root device
	[  +0.413293] systemd-fstab-generator[750]: Ignoring "noauto" for root device
	[  +0.194883] systemd-fstab-generator[786]: Ignoring "noauto" for root device
	[  +0.079334] systemd-fstab-generator[797]: Ignoring "noauto" for root device
	[  +0.075319] systemd-fstab-generator[810]: Ignoring "noauto" for root device
	[  +1.241580] systemd-fstab-generator[968]: Ignoring "noauto" for root device
	[  +0.080868] systemd-fstab-generator[979]: Ignoring "noauto" for root device
	[  +0.070572] systemd-fstab-generator[990]: Ignoring "noauto" for root device
	[  +0.067357] systemd-fstab-generator[1001]: Ignoring "noauto" for root device
	[  +0.069942] systemd-fstab-generator[1042]: Ignoring "noauto" for root device
	[  +2.503453] systemd-fstab-generator[1141]: Ignoring "noauto" for root device
	[  +2.381640] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.661766] systemd-fstab-generator[1457]: Ignoring "noauto" for root device
	[  +5.156537] systemd-fstab-generator[2350]: Ignoring "noauto" for root device
	[ +13.738428] kauditd_printk_skb: 41 callbacks suppressed
	[  +1.700338] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[  +4.800757] kauditd_printk_skb: 48 callbacks suppressed
	[ +14.143799] kauditd_printk_skb: 54 callbacks suppressed
	
	* 
	* ==> etcd [27dc2c0d7a4a] <==
	* {"level":"info","ts":"2023-08-21T10:34:15.516Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","added-peer-id":"c46d288d2fcb0590","added-peer-peer-urls":["https://192.168.105.2:2380"]}
	{"level":"info","ts":"2023-08-21T10:34:15.986Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 is starting a new election at term 1"}
	{"level":"info","ts":"2023-08-21T10:34:15.986Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-08-21T10:34:15.986Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 received MsgPreVoteResp from c46d288d2fcb0590 at term 1"}
	{"level":"info","ts":"2023-08-21T10:34:15.986Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became candidate at term 2"}
	{"level":"info","ts":"2023-08-21T10:34:15.986Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 received MsgVoteResp from c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2023-08-21T10:34:15.986Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became leader at term 2"}
	{"level":"info","ts":"2023-08-21T10:34:15.986Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c46d288d2fcb0590 elected leader c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2023-08-21T10:34:15.991Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"c46d288d2fcb0590","local-member-attributes":"{Name:addons-500000 ClientURLs:[https://192.168.105.2:2379]}","request-path":"/0/members/c46d288d2fcb0590/attributes","cluster-id":"6e03e7863b4f9c54","publish-timeout":"7s"}
	{"level":"info","ts":"2023-08-21T10:34:15.991Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-21T10:34:15.991Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-21T10:34:15.991Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-08-21T10:34:15.992Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-08-21T10:34:16.003Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-08-21T10:34:15.992Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-21T10:34:16.003Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.105.2:2379"}
	{"level":"info","ts":"2023-08-21T10:34:15.992Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-21T10:34:16.003Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-21T10:34:16.003Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-21T10:44:16.025Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":841}
	{"level":"info","ts":"2023-08-21T10:44:16.028Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":841,"took":"2.672822ms","hash":3376273956}
	{"level":"info","ts":"2023-08-21T10:44:16.028Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3376273956,"revision":841,"compact-revision":-1}
	{"level":"info","ts":"2023-08-21T10:49:16.035Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1031}
	{"level":"info","ts":"2023-08-21T10:49:16.038Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1031,"took":"1.375633ms","hash":1895539758}
	{"level":"info","ts":"2023-08-21T10:49:16.038Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1895539758,"revision":1031,"compact-revision":841}
	
	* 
	* ==> gcp-auth [dbe5746b118a] <==
	* 2023/08/21 10:34:42 GCP Auth Webhook started!
	
	* 
	* ==> kernel <==
	*  10:52:33 up 18 min,  0 users,  load average: 0.51, 0.35, 0.28
	Linux addons-500000 5.10.57 #1 SMP PREEMPT Fri Jul 14 22:49:12 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [dc949a6ce14c] <==
	* I0821 10:34:33.591571       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0821 10:34:33.605328       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0821 10:34:33.605346       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0821 10:34:33.791971       1 alloc.go:330] "allocated clusterIPs" service="ingress-nginx/ingress-nginx-controller" clusterIPs=map[IPv4:10.105.18.225]
	I0821 10:34:33.798324       1 alloc.go:330] "allocated clusterIPs" service="ingress-nginx/ingress-nginx-controller-admission" clusterIPs=map[IPv4:10.101.255.12]
	I0821 10:34:33.819925       1 controller.go:624] quota admission added evaluator for: jobs.batch
	I0821 10:34:39.583629       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs=map[IPv4:10.110.39.22]
	I0821 10:39:16.746832       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0821 10:39:16.747262       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0821 10:39:16.747727       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0821 10:39:16.747921       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0821 10:39:16.759280       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0821 10:39:16.759360       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0821 10:44:16.754789       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0821 10:44:16.754844       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0821 10:44:16.754880       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0821 10:44:16.754904       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0821 10:44:16.755317       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0821 10:44:16.755352       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0821 10:49:16.748790       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0821 10:49:16.749408       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0821 10:49:16.759393       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0821 10:49:16.759510       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0821 10:49:16.766063       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0821 10:49:16.766169       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [41982c5e9fc8] <==
	* I0821 10:34:42.731971       1 job_controller.go:523] enqueueing job ingress-nginx/ingress-nginx-admission-patch
	I0821 10:34:42.736066       1 job_controller.go:523] enqueueing job ingress-nginx/ingress-nginx-admission-patch
	I0821 10:34:42.737082       1 event.go:307] "Event occurred" object="ingress-nginx/ingress-nginx-admission-patch" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0821 10:34:42.747456       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0821 10:34:42.752783       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0821 10:34:42.756485       1 job_controller.go:523] enqueueing job ingress-nginx/ingress-nginx-admission-patch
	I0821 10:34:42.854473       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0821 10:34:42.856753       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0821 10:34:42.858553       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0821 10:34:42.858609       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0821 10:34:42.859646       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0821 10:34:42.893612       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0821 10:34:42.895861       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0821 10:34:42.897862       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-create" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0821 10:34:42.897954       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0821 10:34:42.899189       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0821 10:35:01.688712       1 resource_quota_monitor.go:223] "QuotaMonitor created object count evaluator" resource="volumesnapshots.snapshot.storage.k8s.io"
	I0821 10:35:01.688853       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0821 10:35:01.789717       1 shared_informer.go:318] Caches are synced for resource quota
	I0821 10:35:02.109377       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0821 10:35:02.210585       1 shared_informer.go:318] Caches are synced for garbage collector
	I0821 10:35:12.010356       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0821 10:35:12.011197       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0821 10:35:12.022044       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0821 10:35:12.024702       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	
	* 
	* ==> kube-proxy [36558206e7eb] <==
	* I0821 10:34:32.961845       1 node.go:141] Successfully retrieved node IP: 192.168.105.2
	I0821 10:34:32.961903       1 server_others.go:110] "Detected node IP" address="192.168.105.2"
	I0821 10:34:32.961922       1 server_others.go:554] "Using iptables proxy"
	I0821 10:34:32.984111       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0821 10:34:32.984124       1 server_others.go:192] "Using iptables Proxier"
	I0821 10:34:32.984147       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0821 10:34:32.984347       1 server.go:658] "Version info" version="v1.27.4"
	I0821 10:34:32.984357       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0821 10:34:32.984958       1 config.go:315] "Starting node config controller"
	I0821 10:34:32.984965       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0821 10:34:32.985291       1 config.go:188] "Starting service config controller"
	I0821 10:34:32.985295       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0821 10:34:32.985301       1 config.go:97] "Starting endpoint slice config controller"
	I0821 10:34:32.985318       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0821 10:34:33.085576       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0821 10:34:33.085604       1 shared_informer.go:318] Caches are synced for node config
	I0821 10:34:33.085608       1 shared_informer.go:318] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [bd48baf71b16] <==
	* W0821 10:34:16.768490       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0821 10:34:16.768493       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0821 10:34:16.768508       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0821 10:34:16.768511       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0821 10:34:16.768562       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0821 10:34:16.768566       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0821 10:34:17.606010       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0821 10:34:17.606029       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0821 10:34:17.645166       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0821 10:34:17.645193       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0821 10:34:17.674598       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0821 10:34:17.674623       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0821 10:34:17.707767       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0821 10:34:17.707781       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0821 10:34:17.724040       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0821 10:34:17.724057       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0821 10:34:17.728085       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0821 10:34:17.728146       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0821 10:34:17.756871       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0821 10:34:17.756889       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0821 10:34:17.785527       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0821 10:34:17.785576       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0821 10:34:17.785527       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0821 10:34:17.785647       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0821 10:34:20.949364       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-08-21 10:34:00 UTC, ends at Mon 2023-08-21 10:52:33 UTC. --
	Aug 21 10:47:19 addons-500000 kubelet[2369]: E0821 10:47:19.567490    2369 iptables.go:575] "Could not set up iptables canary" err=<
	Aug 21 10:47:19 addons-500000 kubelet[2369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 21 10:47:19 addons-500000 kubelet[2369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 21 10:47:19 addons-500000 kubelet[2369]:  > table=nat chain=KUBE-KUBELET-CANARY
	Aug 21 10:48:19 addons-500000 kubelet[2369]: E0821 10:48:19.564490    2369 iptables.go:575] "Could not set up iptables canary" err=<
	Aug 21 10:48:19 addons-500000 kubelet[2369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 21 10:48:19 addons-500000 kubelet[2369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 21 10:48:19 addons-500000 kubelet[2369]:  > table=nat chain=KUBE-KUBELET-CANARY
	Aug 21 10:49:19 addons-500000 kubelet[2369]: W0821 10:49:19.449586    2369 machine.go:65] Cannot read vendor id correctly, set empty.
	Aug 21 10:49:19 addons-500000 kubelet[2369]: E0821 10:49:19.565825    2369 iptables.go:575] "Could not set up iptables canary" err=<
	Aug 21 10:49:19 addons-500000 kubelet[2369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 21 10:49:19 addons-500000 kubelet[2369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 21 10:49:19 addons-500000 kubelet[2369]:  > table=nat chain=KUBE-KUBELET-CANARY
	Aug 21 10:50:19 addons-500000 kubelet[2369]: E0821 10:50:19.566360    2369 iptables.go:575] "Could not set up iptables canary" err=<
	Aug 21 10:50:19 addons-500000 kubelet[2369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 21 10:50:19 addons-500000 kubelet[2369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 21 10:50:19 addons-500000 kubelet[2369]:  > table=nat chain=KUBE-KUBELET-CANARY
	Aug 21 10:51:19 addons-500000 kubelet[2369]: E0821 10:51:19.566744    2369 iptables.go:575] "Could not set up iptables canary" err=<
	Aug 21 10:51:19 addons-500000 kubelet[2369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 21 10:51:19 addons-500000 kubelet[2369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 21 10:51:19 addons-500000 kubelet[2369]:  > table=nat chain=KUBE-KUBELET-CANARY
	Aug 21 10:52:19 addons-500000 kubelet[2369]: E0821 10:52:19.565301    2369 iptables.go:575] "Could not set up iptables canary" err=<
	Aug 21 10:52:19 addons-500000 kubelet[2369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 21 10:52:19 addons-500000 kubelet[2369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 21 10:52:19 addons-500000 kubelet[2369]:  > table=nat chain=KUBE-KUBELET-CANARY

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-500000 -n addons-500000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-500000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-cxgb2 ingress-nginx-admission-patch-fkwhp
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-500000 describe pod ingress-nginx-admission-create-cxgb2 ingress-nginx-admission-patch-fkwhp
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-500000 describe pod ingress-nginx-admission-create-cxgb2 ingress-nginx-admission-patch-fkwhp: exit status 1 (35.42825ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-cxgb2" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-fkwhp" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-500000 describe pod ingress-nginx-admission-create-cxgb2 ingress-nginx-admission-patch-fkwhp: exit status 1
--- FAIL: TestAddons/parallel/Registry (720.95s)

TestAddons/parallel/Ingress (136.82s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:183: (dbg) Run:  kubectl --context addons-500000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:208: (dbg) Run:  kubectl --context addons-500000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context addons-500000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [ae965586-52f4-4c17-908f-204c947d0d36] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [ae965586-52f4-4c17-908f-204c947d0d36] Running
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.013528s
addons_test.go:238: (dbg) Run:  out/minikube-darwin-arm64 -p addons-500000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Run:  kubectl --context addons-500000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-darwin-arm64 -p addons-500000 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.105.2
addons_test.go:273: (dbg) Non-zero exit: nslookup hello-john.test 192.168.105.2: exit status 1 (15.040660208s)

-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
addons_test.go:275: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.105.2" : exit status 1
addons_test.go:279: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
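(nslookup timed out against the ingress-dns server at 192.168.105.2. The same query can be reproduced programmatically by pointing a resolver at that address directly; a small Go sketch, where hello-john.test comes from testdata/ingress-dns-example-v1.yaml above:)

	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Send every lookup to the ingress-dns server on the node IP, which is
		// what `nslookup hello-john.test 192.168.105.2` does before timing out.
		r := &net.Resolver{
			PreferGo: true,
			Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
				d := net.Dialer{Timeout: 2 * time.Second}
				return d.DialContext(ctx, network, "192.168.105.2:53")
			},
		}
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
		addrs, err := r.LookupHost(ctx, "hello-john.test")
		if err != nil {
			fmt.Println("lookup failed:", err) // the "no servers could be reached" symptom
			return
		}
		fmt.Println("resolved:", addrs)
	}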
addons_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p addons-500000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p addons-500000 addons disable ingress-dns --alsologtostderr -v=1: exit status 10 (1m43.005382208s)

-- stdout --
-- /stdout --
** stderr ** 
	I0821 04:02:52.258565    2225 out.go:296] Setting OutFile to fd 1 ...
	I0821 04:02:52.258823    2225 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:02:52.258827    2225 out.go:309] Setting ErrFile to fd 2...
	I0821 04:02:52.258830    2225 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:02:52.258979    2225 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17102-920/.minikube/bin
	I0821 04:02:52.259262    2225 addons.go:594] checking whether the cluster is paused
	I0821 04:02:52.259487    2225 config.go:182] Loaded profile config "addons-500000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0821 04:02:52.259499    2225 host.go:66] Checking if "addons-500000" exists ...
	I0821 04:02:52.260526    2225 ssh_runner.go:195] Run: systemctl --version
	I0821 04:02:52.260542    2225 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/id_rsa Username:docker}
	I0821 04:02:52.292258    2225 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0821 04:02:52.299131    2225 mustload.go:65] Loading cluster: addons-500000
	I0821 04:02:52.299250    2225 config.go:182] Loaded profile config "addons-500000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0821 04:02:52.299327    2225 config.go:182] Loaded profile config "addons-500000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0821 04:02:52.299336    2225 addons.go:69] Setting ingress-dns=false in profile "addons-500000"
	I0821 04:02:52.299341    2225 addons.go:231] Setting addon ingress-dns=false in "addons-500000"
	I0821 04:02:52.299356    2225 host.go:66] Checking if "addons-500000" exists ...
	I0821 04:02:52.300159    2225 addons.go:428] Removing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0821 04:02:52.300204    2225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl delete --ignore-not-found -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0821 04:02:52.300211    2225 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/id_rsa Username:docker}
	W0821 04:02:52.354895    2225 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl delete --ignore-not-found -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: the path "/etc/kubernetes/addons/ingress-dns-pod.yaml" does not exist
	I0821 04:02:52.354920    2225 retry.go:31] will retry after 293.452345ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl delete --ignore-not-found -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: the path "/etc/kubernetes/addons/ingress-dns-pod.yaml" does not exist
	I0821 04:02:52.650809    2225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl delete --force --ignore-not-found -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0821 04:02:52.715583    2225 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl delete --force --ignore-not-found -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: the path "/etc/kubernetes/addons/ingress-dns-pod.yaml" does not exist
	I0821 04:02:52.715611    2225 retry.go:31] will retry after 336.77024ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl delete --force --ignore-not-found -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: the path "/etc/kubernetes/addons/ingress-dns-pod.yaml" does not exist
	I0821 04:02:53.054795    2225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl delete --force --ignore-not-found -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0821 04:02:53.112818    2225 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl delete --force --ignore-not-found -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: the path "/etc/kubernetes/addons/ingress-dns-pod.yaml" does not exist
	I0821 04:02:53.112849    2225 retry.go:31] will retry after 543.621937ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl delete --force --ignore-not-found -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: the path "/etc/kubernetes/addons/ingress-dns-pod.yaml" does not exist
	I0821 04:02:53.658930    2225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl delete --force --ignore-not-found -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0821 04:02:53.726044    2225 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl delete --force --ignore-not-found -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: the path "/etc/kubernetes/addons/ingress-dns-pod.yaml" does not exist
	I0821 04:02:53.726074    2225 retry.go:31] will retry after 1.01766524s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl delete --force --ignore-not-found -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: the path "/etc/kubernetes/addons/ingress-dns-pod.yaml" does not exist
	I0821 04:02:54.745954    2225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl delete --force --ignore-not-found -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0821 04:02:54.791571    2225 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl delete --force --ignore-not-found -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: the path "/etc/kubernetes/addons/ingress-dns-pod.yaml" does not exist
	I0821 04:02:54.791592    2225 retry.go:31] will retry after 1.469038919s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl delete --force --ignore-not-found -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: the path "/etc/kubernetes/addons/ingress-dns-pod.yaml" does not exist
	I0821 04:02:56.263030    2225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl delete --force --ignore-not-found -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0821 04:02:56.327248    2225 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl delete --force --ignore-not-found -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: the path "/etc/kubernetes/addons/ingress-dns-pod.yaml" does not exist
	I0821 04:02:56.327269    2225 retry.go:31] will retry after 1.967849758s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl delete --force --ignore-not-found -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: the path "/etc/kubernetes/addons/ingress-dns-pod.yaml" does not exist
	I0821 04:02:58.297613    2225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl delete --force --ignore-not-found -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0821 04:02:58.366017    2225 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl delete --force --ignore-not-found -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: the path "/etc/kubernetes/addons/ingress-dns-pod.yaml" does not exist
	I0821 04:02:58.366042    2225 retry.go:31] will retry after 2.480235085s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl delete --force --ignore-not-found -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: the path "/etc/kubernetes/addons/ingress-dns-pod.yaml" does not exist
	I0821 04:03:00.848723    2225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl delete --force --ignore-not-found -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0821 04:03:00.937654    2225 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl delete --force --ignore-not-found -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: the path "/etc/kubernetes/addons/ingress-dns-pod.yaml" does not exist
	I0821 04:03:00.937677    2225 retry.go:31] will retry after 4.352151134s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl delete --force --ignore-not-found -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: the path "/etc/kubernetes/addons/ingress-dns-pod.yaml" does not exist
	I0821 04:03:05.292232    2225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl delete --force --ignore-not-found -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0821 04:03:05.345826    2225 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl delete --force --ignore-not-found -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: the path "/etc/kubernetes/addons/ingress-dns-pod.yaml" does not exist
	I0821 04:03:05.345843    2225 retry.go:31] will retry after 8.052554783s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl delete --force --ignore-not-found -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: the path "/etc/kubernetes/addons/ingress-dns-pod.yaml" does not exist
	I0821 04:03:13.400787    2225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl delete --force --ignore-not-found -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0821 04:03:13.459925    2225 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl delete --force --ignore-not-found -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: the path "/etc/kubernetes/addons/ingress-dns-pod.yaml" does not exist
	I0821 04:03:13.459948    2225 retry.go:31] will retry after 7.019095449s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl delete --force --ignore-not-found -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: the path "/etc/kubernetes/addons/ingress-dns-pod.yaml" does not exist
	I0821 04:03:20.481497    2225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl delete --force --ignore-not-found -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0821 04:03:20.550236    2225 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl delete --force --ignore-not-found -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: the path "/etc/kubernetes/addons/ingress-dns-pod.yaml" does not exist
	I0821 04:03:20.550254    2225 retry.go:31] will retry after 17.939270717s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl delete --force --ignore-not-found -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: the path "/etc/kubernetes/addons/ingress-dns-pod.yaml" does not exist
	I0821 04:03:38.491898    2225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl delete --force --ignore-not-found -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0821 04:03:38.559713    2225 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl delete --force --ignore-not-found -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: the path "/etc/kubernetes/addons/ingress-dns-pod.yaml" does not exist
	I0821 04:03:38.559731    2225 retry.go:31] will retry after 24.134590291s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl delete --force --ignore-not-found -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: the path "/etc/kubernetes/addons/ingress-dns-pod.yaml" does not exist
	I0821 04:04:02.696571    2225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl delete --force --ignore-not-found -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0821 04:04:02.752625    2225 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl delete --force --ignore-not-found -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: the path "/etc/kubernetes/addons/ingress-dns-pod.yaml" does not exist
	I0821 04:04:02.752641    2225 retry.go:31] will retry after 32.385372492s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl delete --force --ignore-not-found -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: the path "/etc/kubernetes/addons/ingress-dns-pod.yaml" does not exist
	I0821 04:04:35.139891    2225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl delete --force --ignore-not-found -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0821 04:04:35.180043    2225 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl delete --force --ignore-not-found -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: the path "/etc/kubernetes/addons/ingress-dns-pod.yaml" does not exist
	I0821 04:04:35.180063    2225 ssh_runner.go:146] rm: /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0821 04:04:35.185011    2225 addons.go:431] error removing /etc/kubernetes/addons/ingress-dns-pod.yaml; addon should still be disabled as expected
	I0821 04:04:35.190246    2225 out.go:177] 
	W0821 04:04:35.193247    2225 out.go:239] X Exiting due to MK_ADDON_DISABLE: disable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl delete --force --ignore-not-found -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: the path "/etc/kubernetes/addons/ingress-dns-pod.yaml" does not exist
	]
	W0821 04:04:35.193254    2225 out.go:239] * 
	W0821 04:04:35.194463    2225 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0821 04:04:35.197164    2225 out.go:177] 

** /stderr **
addons_test.go:284: failed to disable ingress-dns addon. args "out/minikube-darwin-arm64 -p addons-500000 addons disable ingress-dns --alsologtostderr -v=1" : exit status 10
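
The stderr block explains the 1m43s: the manifest path /etc/kubernetes/addons/ingress-dns-pod.yaml never exists on the node, so every `kubectl delete -f` fails with "the path ... does not exist", and `--ignore-not-found` does not help because it only tolerates a missing API object, not a missing manifest file. The disable path keeps retrying with growing delays (293ms, 336ms, 543ms, ... up to ~32s) until the budget runs out and it exits with MK_ADDON_DISABLE. A rough sketch of that retry shape, with hypothetical names (the real loop lives in minikube's retry.go and pkg/addons and differs in detail):

    // retryDelete sketches the backoff visible in the log: each failed
    // `kubectl delete` is retried after a jittered, roughly doubling delay
    // until an overall budget is spent.
    package main

    import (
    	"fmt"
    	"math/rand"
    	"os/exec"
    	"time"
    )

    func retryDelete(manifest string, budget time.Duration) error {
    	delay := 300 * time.Millisecond
    	deadline := time.Now().Add(budget)
    	for {
    		err := exec.Command("kubectl", "delete", "--force",
    			"--ignore-not-found", "-f", manifest).Run()
    		if err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("delete failed after retries: %w", err)
    		}
    		// Jitter keeps concurrent retries from synchronizing.
    		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
    		time.Sleep(sleep)
    		delay *= 2
    	}
    }

    func main() {
    	fmt.Println(retryDelete("/etc/kubernetes/addons/ingress-dns-pod.yaml", 100*time.Second))
    }

Note the failure mode: a retry loop cannot fix a precondition that will never become true, which is why the command burns its whole budget before giving up.
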
addons_test.go:287: (dbg) Run:  out/minikube-darwin-arm64 -p addons-500000 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-darwin-arm64 -p addons-500000 addons disable ingress --alsologtostderr -v=1: (7.290481959s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-500000 -n addons-500000
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-500000 logs -n 25
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-670000 | jenkins | v1.31.2 | 21 Aug 23 03:33 PDT |                     |
	|         | -p download-only-670000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	| start   | -o=json --download-only           | download-only-670000 | jenkins | v1.31.2 | 21 Aug 23 03:33 PDT |                     |
	|         | -p download-only-670000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.4      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	| start   | -o=json --download-only           | download-only-670000 | jenkins | v1.31.2 | 21 Aug 23 03:33 PDT |                     |
	|         | -p download-only-670000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.0-rc.1 |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.31.2 | 21 Aug 23 03:33 PDT | 21 Aug 23 03:33 PDT |
	| delete  | -p download-only-670000           | download-only-670000 | jenkins | v1.31.2 | 21 Aug 23 03:33 PDT | 21 Aug 23 03:33 PDT |
	| delete  | -p download-only-670000           | download-only-670000 | jenkins | v1.31.2 | 21 Aug 23 03:33 PDT | 21 Aug 23 03:33 PDT |
	| start   | --download-only -p                | binary-mirror-462000 | jenkins | v1.31.2 | 21 Aug 23 03:33 PDT |                     |
	|         | binary-mirror-462000              |                      |         |         |                     |                     |
	|         | --alsologtostderr                 |                      |         |         |                     |                     |
	|         | --binary-mirror                   |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49329            |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-462000           | binary-mirror-462000 | jenkins | v1.31.2 | 21 Aug 23 03:33 PDT | 21 Aug 23 03:33 PDT |
	| start   | -p addons-500000                  | addons-500000        | jenkins | v1.31.2 | 21 Aug 23 03:33 PDT | 21 Aug 23 03:40 PDT |
	|         | --wait=true --memory=4000         |                      |         |         |                     |                     |
	|         | --alsologtostderr                 |                      |         |         |                     |                     |
	|         | --addons=registry                 |                      |         |         |                     |                     |
	|         | --addons=metrics-server           |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots          |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver      |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                 |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner            |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget         |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	|         | --addons=ingress                  |                      |         |         |                     |                     |
	|         | --addons=ingress-dns              |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p          | addons-500000        | jenkins | v1.31.2 | 21 Aug 23 03:52 PDT |                     |
	|         | addons-500000                     |                      |         |         |                     |                     |
	| ssh     | addons-500000 ssh curl -s         | addons-500000        | jenkins | v1.31.2 | 21 Aug 23 04:02 PDT | 21 Aug 23 04:02 PDT |
	|         | http://127.0.0.1/ -H 'Host:       |                      |         |         |                     |                     |
	|         | nginx.example.com'                |                      |         |         |                     |                     |
	| ip      | addons-500000 ip                  | addons-500000        | jenkins | v1.31.2 | 21 Aug 23 04:02 PDT | 21 Aug 23 04:02 PDT |
	| addons  | addons-500000 addons disable      | addons-500000        | jenkins | v1.31.2 | 21 Aug 23 04:02 PDT |                     |
	|         | ingress-dns --alsologtostderr     |                      |         |         |                     |                     |
	|         | -v=1                              |                      |         |         |                     |                     |
	| addons  | enable headlamp                   | addons-500000        | jenkins | v1.31.2 | 21 Aug 23 04:04 PDT | 21 Aug 23 04:04 PDT |
	|         | -p addons-500000                  |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1            |                      |         |         |                     |                     |
	| addons  | addons-500000 addons disable      | addons-500000        | jenkins | v1.31.2 | 21 Aug 23 04:04 PDT | 21 Aug 23 04:04 PDT |
	|         | ingress --alsologtostderr -v=1    |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/21 03:33:48
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.7 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0821 03:33:48.415064    1442 out.go:296] Setting OutFile to fd 1 ...
	I0821 03:33:48.415176    1442 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 03:33:48.415179    1442 out.go:309] Setting ErrFile to fd 2...
	I0821 03:33:48.415182    1442 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 03:33:48.415284    1442 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17102-920/.minikube/bin
	I0821 03:33:48.416485    1442 out.go:303] Setting JSON to false
	I0821 03:33:48.431675    1442 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":202,"bootTime":1692613826,"procs":392,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0821 03:33:48.431757    1442 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0821 03:33:48.436776    1442 out.go:177] * [addons-500000] minikube v1.31.2 on Darwin 13.5 (arm64)
	I0821 03:33:48.443786    1442 out.go:177]   - MINIKUBE_LOCATION=17102
	I0821 03:33:48.443817    1442 notify.go:220] Checking for updates...
	I0821 03:33:48.452754    1442 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	I0821 03:33:48.459793    1442 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0821 03:33:48.466761    1442 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0821 03:33:48.469754    1442 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	I0821 03:33:48.472801    1442 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0821 03:33:48.476845    1442 driver.go:373] Setting default libvirt URI to qemu:///system
	I0821 03:33:48.479685    1442 out.go:177] * Using the qemu2 driver based on user configuration
	I0821 03:33:48.486794    1442 start.go:298] selected driver: qemu2
	I0821 03:33:48.486801    1442 start.go:902] validating driver "qemu2" against <nil>
	I0821 03:33:48.486809    1442 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0821 03:33:48.488928    1442 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0821 03:33:48.491687    1442 out.go:177] * Automatically selected the socket_vmnet network
	I0821 03:33:48.495787    1442 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0821 03:33:48.495806    1442 cni.go:84] Creating CNI manager for ""
	I0821 03:33:48.495814    1442 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0821 03:33:48.495818    1442 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0821 03:33:48.495823    1442 start_flags.go:319] config:
	{Name:addons-500000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:addons-500000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 03:33:48.500226    1442 iso.go:125] acquiring lock: {Name:mk813ea611542195bb0511881888be3fabc72ff6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 03:33:48.506762    1442 out.go:177] * Starting control plane node addons-500000 in cluster addons-500000
	I0821 03:33:48.510761    1442 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime docker
	I0821 03:33:48.510781    1442 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4
	I0821 03:33:48.510799    1442 cache.go:57] Caching tarball of preloaded images
	I0821 03:33:48.510861    1442 preload.go:174] Found /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0821 03:33:48.510867    1442 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on docker
	I0821 03:33:48.511057    1442 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/config.json ...
	I0821 03:33:48.511069    1442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/config.json: {Name:mke6ea6a330608889e821054234e4dab41e05376 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:33:48.511283    1442 start.go:365] acquiring machines lock for addons-500000: {Name:mk9b32d9fe994be32d77812db464b2cfa7bfb400 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0821 03:33:48.511397    1442 start.go:369] acquired machines lock for "addons-500000" in 109.25µs
	I0821 03:33:48.511409    1442 start.go:93] Provisioning new machine with config: &{Name:addons-500000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:addons-500000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0821 03:33:48.511444    1442 start.go:125] createHost starting for "" (driver="qemu2")
	I0821 03:33:48.515777    1442 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0821 03:33:48.825711    1442 start.go:159] libmachine.API.Create for "addons-500000" (driver="qemu2")
	I0821 03:33:48.825759    1442 client.go:168] LocalClient.Create starting
	I0821 03:33:48.825907    1442 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem
	I0821 03:33:48.926786    1442 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/cert.pem
	I0821 03:33:49.005435    1442 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17102-920/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17102-920/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0821 03:33:49.429478    1442 main.go:141] libmachine: Creating SSH key...
	I0821 03:33:49.603069    1442 main.go:141] libmachine: Creating Disk image...
	I0821 03:33:49.603078    1442 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0821 03:33:49.603290    1442 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/disk.qcow2.raw /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/disk.qcow2
	I0821 03:33:49.637224    1442 main.go:141] libmachine: STDOUT: 
	I0821 03:33:49.637249    1442 main.go:141] libmachine: STDERR: 
	I0821 03:33:49.637377    1442 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/disk.qcow2 +20000M
	I0821 03:33:49.644766    1442 main.go:141] libmachine: STDOUT: Image resized.
	
	I0821 03:33:49.644778    1442 main.go:141] libmachine: STDERR: 
	I0821 03:33:49.644801    1442 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/disk.qcow2
	I0821 03:33:49.644808    1442 main.go:141] libmachine: Starting QEMU VM...
	I0821 03:33:49.644850    1442 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:15:38:20:81:6d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/disk.qcow2
	I0821 03:33:49.712858    1442 main.go:141] libmachine: STDOUT: 
	I0821 03:33:49.712896    1442 main.go:141] libmachine: STDERR: 
	I0821 03:33:49.712900    1442 main.go:141] libmachine: Attempt 0
	I0821 03:33:49.712923    1442 main.go:141] libmachine: Searching for 5e:15:38:20:81:6d in /var/db/dhcpd_leases ...
	I0821 03:33:51.714037    1442 main.go:141] libmachine: Attempt 1
	I0821 03:33:51.714122    1442 main.go:141] libmachine: Searching for 5e:15:38:20:81:6d in /var/db/dhcpd_leases ...
	I0821 03:33:53.715339    1442 main.go:141] libmachine: Attempt 2
	I0821 03:33:53.715370    1442 main.go:141] libmachine: Searching for 5e:15:38:20:81:6d in /var/db/dhcpd_leases ...
	I0821 03:33:55.716394    1442 main.go:141] libmachine: Attempt 3
	I0821 03:33:55.716406    1442 main.go:141] libmachine: Searching for 5e:15:38:20:81:6d in /var/db/dhcpd_leases ...
	I0821 03:33:57.717443    1442 main.go:141] libmachine: Attempt 4
	I0821 03:33:57.717472    1442 main.go:141] libmachine: Searching for 5e:15:38:20:81:6d in /var/db/dhcpd_leases ...
	I0821 03:33:59.718558    1442 main.go:141] libmachine: Attempt 5
	I0821 03:33:59.718579    1442 main.go:141] libmachine: Searching for 5e:15:38:20:81:6d in /var/db/dhcpd_leases ...
	I0821 03:34:01.719634    1442 main.go:141] libmachine: Attempt 6
	I0821 03:34:01.719657    1442 main.go:141] libmachine: Searching for 5e:15:38:20:81:6d in /var/db/dhcpd_leases ...
	I0821 03:34:01.719810    1442 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0821 03:34:01.719849    1442 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:5e:15:38:20:81:6d ID:1,5e:15:38:20:81:6d Lease:0x64e48f18}
	I0821 03:34:01.719855    1442 main.go:141] libmachine: Found match: 5e:15:38:20:81:6d
	I0821 03:34:01.719867    1442 main.go:141] libmachine: IP: 192.168.105.2
	I0821 03:34:01.719873    1442 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
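
The attempt loop above is how the qemu2 driver learns the VM's IP: QEMU itself does not report it, so libmachine polls macOS's DHCP lease file, /var/db/dhcpd_leases, every two seconds for an entry matching the NIC's MAC (5e:15:38:20:81:6d). A hedged sketch of one scan, with the lease-block layout inferred from the dhcp entry echoed at 03:34:01 (the real parser lives in minikube's qemu2 driver and may differ):

    // leaseIP scans /var/db/dhcpd_leases for the lease whose hw_address
    // contains the VM's MAC. Lease blocks are assumed to list ip_address
    // before hw_address, as in the entry printed in the log.
    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func leaseIP(mac string) (string, error) {
    	data, err := os.ReadFile("/var/db/dhcpd_leases")
    	if err != nil {
    		return "", err
    	}
    	ip := ""
    	for _, line := range strings.Split(string(data), "\n") {
    		line = strings.TrimSpace(line)
    		switch {
    		case strings.HasPrefix(line, "ip_address="):
    			ip = strings.TrimPrefix(line, "ip_address=")
    		case strings.HasPrefix(line, "hw_address=") && strings.Contains(line, mac) && ip != "":
    			return ip, nil
    		}
    	}
    	return "", fmt.Errorf("no lease for %s yet", mac)
    }

    func main() {
    	fmt.Println(leaseIP("5e:15:38:20:81:6d"))
    }

Here the lease shows up on attempt 6, roughly twelve seconds after the VM boots, and SSH provisioning proceeds against 192.168.105.2.
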
	I0821 03:34:03.738025    1442 machine.go:88] provisioning docker machine ...
	I0821 03:34:03.738086    1442 buildroot.go:166] provisioning hostname "addons-500000"
	I0821 03:34:03.739549    1442 main.go:141] libmachine: Using SSH client type: native
	I0821 03:34:03.740347    1442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102aae1e0] 0x102ab0c40 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0821 03:34:03.740367    1442 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-500000 && echo "addons-500000" | sudo tee /etc/hostname
	I0821 03:34:03.826570    1442 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-500000
	
	I0821 03:34:03.826696    1442 main.go:141] libmachine: Using SSH client type: native
	I0821 03:34:03.827174    1442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102aae1e0] 0x102ab0c40 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0821 03:34:03.827189    1442 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-500000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-500000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-500000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0821 03:34:03.891757    1442 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0821 03:34:03.891772    1442 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17102-920/.minikube CaCertPath:/Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17102-920/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17102-920/.minikube}
	I0821 03:34:03.891782    1442 buildroot.go:174] setting up certificates
	I0821 03:34:03.891796    1442 provision.go:83] configureAuth start
	I0821 03:34:03.891801    1442 provision.go:138] copyHostCerts
	I0821 03:34:03.891982    1442 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17102-920/.minikube/ca.pem (1078 bytes)
	I0821 03:34:03.892356    1442 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17102-920/.minikube/cert.pem (1123 bytes)
	I0821 03:34:03.892494    1442 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17102-920/.minikube/key.pem (1679 bytes)
	I0821 03:34:03.892606    1442 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17102-920/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca-key.pem org=jenkins.addons-500000 san=[192.168.105.2 192.168.105.2 localhost 127.0.0.1 minikube addons-500000]
	I0821 03:34:04.055231    1442 provision.go:172] copyRemoteCerts
	I0821 03:34:04.055290    1442 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0821 03:34:04.055299    1442 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/id_rsa Username:docker}
	I0821 03:34:04.085022    1442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0821 03:34:04.091757    1442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0821 03:34:04.098302    1442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0821 03:34:04.105297    1442 provision.go:86] duration metric: configureAuth took 213.489792ms
	I0821 03:34:04.105304    1442 buildroot.go:189] setting minikube options for container-runtime
	I0821 03:34:04.105410    1442 config.go:182] Loaded profile config "addons-500000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0821 03:34:04.105443    1442 main.go:141] libmachine: Using SSH client type: native
	I0821 03:34:04.105658    1442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102aae1e0] 0x102ab0c40 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0821 03:34:04.105665    1442 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0821 03:34:04.160033    1442 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0821 03:34:04.160039    1442 buildroot.go:70] root file system type: tmpfs
	I0821 03:34:04.160095    1442 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0821 03:34:04.160145    1442 main.go:141] libmachine: Using SSH client type: native
	I0821 03:34:04.160376    1442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102aae1e0] 0x102ab0c40 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0821 03:34:04.160410    1442 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0821 03:34:04.217511    1442 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0821 03:34:04.217555    1442 main.go:141] libmachine: Using SSH client type: native
	I0821 03:34:04.217777    1442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102aae1e0] 0x102ab0c40 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0821 03:34:04.217788    1442 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0821 03:34:04.516566    1442 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0821 03:34:04.516576    1442 machine.go:91] provisioned docker machine in 778.543875ms
	I0821 03:34:04.516581    1442 client.go:171] LocalClient.Create took 15.691254833s
	I0821 03:34:04.516600    1442 start.go:167] duration metric: libmachine.API.Create for "addons-500000" took 15.691329875s
	I0821 03:34:04.516605    1442 start.go:300] post-start starting for "addons-500000" (driver="qemu2")
	I0821 03:34:04.516610    1442 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0821 03:34:04.516676    1442 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0821 03:34:04.516684    1442 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/id_rsa Username:docker}
	I0821 03:34:04.547645    1442 ssh_runner.go:195] Run: cat /etc/os-release
	I0821 03:34:04.548977    1442 info.go:137] Remote host: Buildroot 2021.02.12
	I0821 03:34:04.548988    1442 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17102-920/.minikube/addons for local assets ...
	I0821 03:34:04.549067    1442 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17102-920/.minikube/files for local assets ...
	I0821 03:34:04.549094    1442 start.go:303] post-start completed in 32.487208ms
	I0821 03:34:04.549503    1442 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/config.json ...
	I0821 03:34:04.549671    1442 start.go:128] duration metric: createHost completed in 16.038665083s
	I0821 03:34:04.549713    1442 main.go:141] libmachine: Using SSH client type: native
	I0821 03:34:04.549937    1442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102aae1e0] 0x102ab0c40 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0821 03:34:04.549942    1442 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0821 03:34:04.603319    1442 main.go:141] libmachine: SSH cmd err, output: <nil>: 1692614044.503149419
	
	I0821 03:34:04.603325    1442 fix.go:206] guest clock: 1692614044.503149419
	I0821 03:34:04.603329    1442 fix.go:219] Guest: 2023-08-21 03:34:04.503149419 -0700 PDT Remote: 2023-08-21 03:34:04.549674 -0700 PDT m=+16.153755168 (delta=-46.524581ms)
	I0821 03:34:04.603340    1442 fix.go:190] guest clock delta is within tolerance: -46.524581ms
	I0821 03:34:04.603349    1442 start.go:83] releasing machines lock for "addons-500000", held for 16.092394834s
	I0821 03:34:04.603625    1442 ssh_runner.go:195] Run: cat /version.json
	I0821 03:34:04.603635    1442 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/id_rsa Username:docker}
	I0821 03:34:04.603639    1442 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0821 03:34:04.603685    1442 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/id_rsa Username:docker}
	I0821 03:34:04.631400    1442 ssh_runner.go:195] Run: systemctl --version
	I0821 03:34:04.633303    1442 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0821 03:34:04.675003    1442 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0821 03:34:04.675044    1442 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0821 03:34:04.680093    1442 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0821 03:34:04.680102    1442 start.go:466] detecting cgroup driver to use...
	I0821 03:34:04.680217    1442 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0821 03:34:04.685575    1442 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0821 03:34:04.689003    1442 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0821 03:34:04.692463    1442 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0821 03:34:04.692496    1442 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0821 03:34:04.695492    1442 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0821 03:34:04.698438    1442 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0821 03:34:04.701779    1442 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0821 03:34:04.705308    1442 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0821 03:34:04.708997    1442 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0821 03:34:04.712485    1442 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0821 03:34:04.715157    1442 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0821 03:34:04.718062    1442 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0821 03:34:04.801182    1442 ssh_runner.go:195] Run: sudo systemctl restart containerd
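Collected in one place, the containerd configuration above is a handful of sed rewrites of /etc/containerd/config.toml followed by a restart; a sketch using the exact expressions from the log:

    # switch containerd to the cgroupfs driver and the runc v2 shim
    sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml
    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
    sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml
    sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml
    sudo systemctl daemon-reload && sudo systemctl restart containerd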
	I0821 03:34:04.809752    1442 start.go:466] detecting cgroup driver to use...
	I0821 03:34:04.809829    1442 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0821 03:34:04.815491    1442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0821 03:34:04.820439    1442 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0821 03:34:04.826330    1442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0821 03:34:04.831197    1442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0821 03:34:04.835955    1442 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0821 03:34:04.893707    1442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0821 03:34:04.899704    1442 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0821 03:34:04.905738    1442 ssh_runner.go:195] Run: which cri-dockerd
	I0821 03:34:04.907314    1442 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0821 03:34:04.910018    1442 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0821 03:34:04.915159    1442 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0821 03:34:04.993497    1442 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0821 03:34:05.073322    1442 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0821 03:34:05.073337    1442 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
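The 144-byte daemon.json pushed above is not printed in the log; only the cgroupfs driver setting is confirmed by the docker.go line. A representative file of the kind being written (contents beyond exec-opts are an assumption):

    # assumed daemon.json contents; only native.cgroupdriver=cgroupfs is confirmed by the log
    cat <<'EOF' | sudo tee /etc/docker/daemon.json
    {
      "exec-opts": ["native.cgroupdriver=cgroupfs"],
      "log-driver": "json-file",
      "log-opts": { "max-size": "100m" },
      "storage-driver": "overlay2"
    }
    EOF
    sudo systemctl daemon-reload && sudo systemctl restart docker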
	I0821 03:34:05.078736    1442 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0821 03:34:05.148942    1442 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0821 03:34:06.310888    1442 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.161962625s)
	I0821 03:34:06.310946    1442 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0821 03:34:06.389910    1442 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0821 03:34:06.470512    1442 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0821 03:34:06.540771    1442 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0821 03:34:06.608028    1442 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0821 03:34:06.614951    1442 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0821 03:34:06.680856    1442 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0821 03:34:06.705016    1442 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0821 03:34:06.705100    1442 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0821 03:34:06.707492    1442 start.go:534] Will wait 60s for crictl version
	I0821 03:34:06.707526    1442 ssh_runner.go:195] Run: which crictl
	I0821 03:34:06.708906    1442 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0821 03:34:06.723485    1442 start.go:550] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.4
	RuntimeApiVersion:  v1alpha2
	I0821 03:34:06.723553    1442 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0821 03:34:06.733136    1442 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0821 03:34:06.752243    1442 out.go:204] * Preparing Kubernetes v1.27.4 on Docker 24.0.4 ...
	I0821 03:34:06.752395    1442 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0821 03:34:06.753728    1442 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
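The one-liner above filters any stale host.minikube.internal record out of /etc/hosts with grep -v, appends a fresh one, and copies the temp file back with sudo (a plain redirection into /etc/hosts would run as the unprivileged user and fail). The record it leaves behind in the guest:

    192.168.105.1	host.minikube.internal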
	I0821 03:34:06.757671    1442 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime docker
	I0821 03:34:06.757717    1442 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0821 03:34:06.767699    1442 docker.go:636] Got preloaded images: 
	I0821 03:34:06.767706    1442 docker.go:642] registry.k8s.io/kube-apiserver:v1.27.4 wasn't preloaded
	I0821 03:34:06.767758    1442 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0821 03:34:06.770623    1442 ssh_runner.go:195] Run: which lz4
	I0821 03:34:06.772016    1442 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0821 03:34:06.773407    1442 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0821 03:34:06.773426    1442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (343658271 bytes)
	I0821 03:34:08.065715    1442 docker.go:600] Took 1.293779 seconds to copy over tarball
	I0821 03:34:08.065776    1442 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0821 03:34:09.083194    1442 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.017432542s)
	I0821 03:34:09.083208    1442 ssh_runner.go:146] rm: /preloaded.tar.lz4
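The preload steps above (scp the lz4 tarball, unpack it into /var, delete it) can be reproduced by hand. A sketch with the paths from this log; it stages the file in /tmp, since a plain scp cannot write to / (minikube's ssh_runner handles that internally):

    KEY=/Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/id_rsa
    TAR=/Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4
    scp -i "$KEY" "$TAR" docker@192.168.105.2:/tmp/preloaded.tar.lz4
    ssh -i "$KEY" docker@192.168.105.2 \
      'sudo tar -I lz4 -C /var -xf /tmp/preloaded.tar.lz4 && sudo rm /tmp/preloaded.tar.lz4'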
	I0821 03:34:09.098174    1442 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0821 03:34:09.101758    1442 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0821 03:34:09.107271    1442 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0821 03:34:09.185186    1442 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0821 03:34:11.583398    1442 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.398262792s)
	I0821 03:34:11.583497    1442 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0821 03:34:11.599112    1442 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.4
	registry.k8s.io/kube-controller-manager:v1.27.4
	registry.k8s.io/kube-scheduler:v1.27.4
	registry.k8s.io/kube-proxy:v1.27.4
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0821 03:34:11.599121    1442 cache_images.go:84] Images are preloaded, skipping loading
	I0821 03:34:11.599173    1442 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0821 03:34:11.606813    1442 cni.go:84] Creating CNI manager for ""
	I0821 03:34:11.606822    1442 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0821 03:34:11.606852    1442 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0821 03:34:11.606862    1442 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.27.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-500000 NodeName:addons-500000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0821 03:34:11.606930    1442 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-500000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
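The kube-proxy section above closes the generated kubeadm config. The file can be sanity-checked without mutating the node, since kubeadm init supports a dry run (a sketch, using the binary path from the init invocation later in this log):

    sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run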
	
	I0821 03:34:11.606959    1442 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-500000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.4 ClusterName:addons-500000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0821 03:34:11.607013    1442 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.4
	I0821 03:34:11.609958    1442 binaries.go:44] Found k8s binaries, skipping transfer
	I0821 03:34:11.609992    1442 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0821 03:34:11.613080    1442 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0821 03:34:11.618135    1442 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0821 03:34:11.623217    1442 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I0821 03:34:11.628067    1442 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0821 03:34:11.629338    1442 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0821 03:34:11.633264    1442 certs.go:56] Setting up /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000 for IP: 192.168.105.2
	I0821 03:34:11.633272    1442 certs.go:190] acquiring lock for shared ca certs: {Name:mkaf8bee91c9bef113528e728629bac5c142d5d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:34:11.633419    1442 certs.go:204] generating minikubeCA CA: /Users/jenkins/minikube-integration/17102-920/.minikube/ca.key
	I0821 03:34:11.709497    1442 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17102-920/.minikube/ca.crt ...
	I0821 03:34:11.709504    1442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/ca.crt: {Name:mk11304afc04d282dffa1bbfafecb7763b86f0d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:34:11.709741    1442 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17102-920/.minikube/ca.key ...
	I0821 03:34:11.709747    1442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/ca.key: {Name:mk7632addcfceaabe09bce428c8dd59051132a6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:34:11.709856    1442 certs.go:204] generating proxyClientCA CA: /Users/jenkins/minikube-integration/17102-920/.minikube/proxy-client-ca.key
	I0821 03:34:11.928292    1442 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17102-920/.minikube/proxy-client-ca.crt ...
	I0821 03:34:11.928298    1442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/proxy-client-ca.crt: {Name:mk59ba2d6f1e462ee2e456d21a76e6acaba82b70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:34:11.928531    1442 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17102-920/.minikube/proxy-client-ca.key ...
	I0821 03:34:11.928534    1442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/proxy-client-ca.key: {Name:mk02c96134c44ce7714696be07e0b5c22f58dc64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:34:11.928684    1442 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/client.key
	I0821 03:34:11.928691    1442 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/client.crt with IP's: []
	I0821 03:34:12.116170    1442 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/client.crt ...
	I0821 03:34:12.116177    1442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/client.crt: {Name:mk3182b685506ec2dbfcad41054e3ffc2bf0f3b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:34:12.116379    1442 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/client.key ...
	I0821 03:34:12.116384    1442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/client.key: {Name:mk087ee0a568a92e1e97ae6eb06dd6604454b2e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:34:12.116489    1442 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/apiserver.key.96055969
	I0821 03:34:12.116499    1442 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/apiserver.crt.96055969 with IP's: [192.168.105.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0821 03:34:12.174634    1442 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/apiserver.crt.96055969 ...
	I0821 03:34:12.174637    1442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/apiserver.crt.96055969: {Name:mk02f137a3a75334a28e6811666f6d1dde47709c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:34:12.174771    1442 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/apiserver.key.96055969 ...
	I0821 03:34:12.174774    1442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/apiserver.key.96055969: {Name:mk629f60ce1370d0aadb852a255428713cef631b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:34:12.174873    1442 certs.go:337] copying /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/apiserver.crt.96055969 -> /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/apiserver.crt
	I0821 03:34:12.175028    1442 certs.go:341] copying /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/apiserver.key.96055969 -> /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/apiserver.key
	I0821 03:34:12.175114    1442 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/proxy-client.key
	I0821 03:34:12.175123    1442 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/proxy-client.crt with IP's: []
	I0821 03:34:12.291172    1442 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/proxy-client.crt ...
	I0821 03:34:12.291175    1442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/proxy-client.crt: {Name:mk4861ba5de37ed8d82543663b167ed0e04664dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:34:12.291331    1442 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/proxy-client.key ...
	I0821 03:34:12.291334    1442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/proxy-client.key: {Name:mk5eb1fb206858f7f6262a3b86ec8673fdeb4399 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:34:12.291586    1442 certs.go:437] found cert: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca-key.pem (1679 bytes)
	I0821 03:34:12.291611    1442 certs.go:437] found cert: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem (1078 bytes)
	I0821 03:34:12.291633    1442 certs.go:437] found cert: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/Users/jenkins/minikube-integration/17102-920/.minikube/certs/cert.pem (1123 bytes)
	I0821 03:34:12.291654    1442 certs.go:437] found cert: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/Users/jenkins/minikube-integration/17102-920/.minikube/certs/key.pem (1679 bytes)
	I0821 03:34:12.292029    1442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0821 03:34:12.300489    1442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0821 03:34:12.307765    1442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0821 03:34:12.314499    1442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0821 03:34:12.321449    1442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0821 03:34:12.328965    1442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0821 03:34:12.336085    1442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0821 03:34:12.342676    1442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0821 03:34:12.349529    1442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0821 03:34:12.356907    1442 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0821 03:34:12.363000    1442 ssh_runner.go:195] Run: openssl version
	I0821 03:34:12.364943    1442 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0821 03:34:12.368659    1442 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0821 03:34:12.370316    1442 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 21 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I0821 03:34:12.370337    1442 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0821 03:34:12.372170    1442 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
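The b5213941.0 link name above is not arbitrary: it is the OpenSSL subject hash of the minikube CA, which is exactly what the preceding "openssl x509 -hash" run computes. Reproduced by hand:

    # prints the 8-hex-digit subject hash used as the symlink name (b5213941 here)
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0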
	I0821 03:34:12.375051    1442 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0821 03:34:12.376254    1442 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0821 03:34:12.376292    1442 kubeadm.go:404] StartCluster: {Name:addons-500000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:addons-500000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 03:34:12.376353    1442 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0821 03:34:12.381765    1442 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0821 03:34:12.385127    1442 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0821 03:34:12.388050    1442 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0821 03:34:12.390699    1442 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0821 03:34:12.390714    1442 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0821 03:34:12.412358    1442 kubeadm.go:322] [init] Using Kubernetes version: v1.27.4
	I0821 03:34:12.412390    1442 kubeadm.go:322] [preflight] Running pre-flight checks
	I0821 03:34:12.465080    1442 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0821 03:34:12.465135    1442 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0821 03:34:12.465183    1442 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0821 03:34:12.530098    1442 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0821 03:34:12.539343    1442 out.go:204]   - Generating certificates and keys ...
	I0821 03:34:12.539375    1442 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0821 03:34:12.539413    1442 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0821 03:34:12.639909    1442 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0821 03:34:12.680054    1442 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0821 03:34:12.714095    1442 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0821 03:34:12.849965    1442 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0821 03:34:12.996137    1442 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0821 03:34:12.996199    1442 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-500000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0821 03:34:13.141022    1442 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0821 03:34:13.141102    1442 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-500000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0821 03:34:13.228117    1442 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0821 03:34:13.409230    1442 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0821 03:34:13.774136    1442 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0821 03:34:13.774180    1442 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0821 03:34:13.866700    1442 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0821 03:34:13.977782    1442 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0821 03:34:14.068222    1442 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0821 03:34:14.144551    1442 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0821 03:34:14.151809    1442 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0821 03:34:14.152307    1442 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0821 03:34:14.152438    1442 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0821 03:34:14.228545    1442 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0821 03:34:14.232527    1442 out.go:204]   - Booting up control plane ...
	I0821 03:34:14.232575    1442 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0821 03:34:14.232614    1442 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0821 03:34:14.232645    1442 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0821 03:34:14.236440    1442 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0821 03:34:14.238376    1442 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0821 03:34:18.241227    1442 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.002539 seconds
	I0821 03:34:18.241427    1442 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0821 03:34:18.252886    1442 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0821 03:34:18.774491    1442 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0821 03:34:18.774728    1442 kubeadm.go:322] [mark-control-plane] Marking the node addons-500000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0821 03:34:19.280325    1442 kubeadm.go:322] [bootstrap-token] Using token: jvxtql.8wgzhr7nb5g9o93n
	I0821 03:34:19.286479    1442 out.go:204]   - Configuring RBAC rules ...
	I0821 03:34:19.286537    1442 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0821 03:34:19.290363    1442 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0821 03:34:19.293121    1442 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0821 03:34:19.294256    1442 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0821 03:34:19.295736    1442 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0821 03:34:19.296773    1442 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0821 03:34:19.301173    1442 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0821 03:34:19.474355    1442 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0821 03:34:19.693544    1442 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0821 03:34:19.694011    1442 kubeadm.go:322] 
	I0821 03:34:19.694043    1442 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0821 03:34:19.694047    1442 kubeadm.go:322] 
	I0821 03:34:19.694084    1442 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0821 03:34:19.694086    1442 kubeadm.go:322] 
	I0821 03:34:19.694099    1442 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0821 03:34:19.694192    1442 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0821 03:34:19.694216    1442 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0821 03:34:19.694219    1442 kubeadm.go:322] 
	I0821 03:34:19.694251    1442 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0821 03:34:19.694263    1442 kubeadm.go:322] 
	I0821 03:34:19.694293    1442 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0821 03:34:19.694296    1442 kubeadm.go:322] 
	I0821 03:34:19.694320    1442 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0821 03:34:19.694360    1442 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0821 03:34:19.694390    1442 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0821 03:34:19.694394    1442 kubeadm.go:322] 
	I0821 03:34:19.694446    1442 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0821 03:34:19.694488    1442 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0821 03:34:19.694495    1442 kubeadm.go:322] 
	I0821 03:34:19.694535    1442 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token jvxtql.8wgzhr7nb5g9o93n \
	I0821 03:34:19.694617    1442 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:c361d9930575cb4141f86c9c696a425212668e350af0245a5e7de41b1bd48407 \
	I0821 03:34:19.694632    1442 kubeadm.go:322] 	--control-plane 
	I0821 03:34:19.694634    1442 kubeadm.go:322] 
	I0821 03:34:19.694684    1442 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0821 03:34:19.694688    1442 kubeadm.go:322] 
	I0821 03:34:19.694735    1442 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token jvxtql.8wgzhr7nb5g9o93n \
	I0821 03:34:19.694782    1442 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:c361d9930575cb4141f86c9c696a425212668e350af0245a5e7de41b1bd48407 
	I0821 03:34:19.694835    1442 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0821 03:34:19.694840    1442 cni.go:84] Creating CNI manager for ""
	I0821 03:34:19.694847    1442 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0821 03:34:19.703814    1442 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0821 03:34:19.707890    1442 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0821 03:34:19.711023    1442 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
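The 457-byte 1-k8s.conflist pushed above is not shown in the log. A representative bridge conflist of the kind minikube installs, using the 10.244.0.0/16 pod CIDR from the kubeadm options earlier (contents are a sketch, not the actual file):

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }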
	I0821 03:34:19.716873    1442 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0821 03:34:19.716924    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:19.716951    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43 minikube.k8s.io/name=addons-500000 minikube.k8s.io/updated_at=2023_08_21T03_34_19_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:19.723924    1442 ops.go:34] apiserver oom_adj: -16
	I0821 03:34:19.767999    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:19.814902    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:20.352169    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:20.852188    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:21.352164    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:21.852123    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:22.352346    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:22.852184    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:23.352159    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:23.852279    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:24.352116    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:24.852182    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:25.352203    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:25.852083    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:26.352293    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:26.852062    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:27.352046    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:27.851991    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:28.352173    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:28.851976    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:29.352173    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:29.851943    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:30.352016    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:30.851904    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:31.351923    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:31.851905    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:32.351835    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:32.388500    1442 kubeadm.go:1081] duration metric: took 12.671972458s to wait for elevateKubeSystemPrivileges.
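The burst of identical "kubectl get sa default" runs above is minikube polling until the default service account exists, at which point the cluster-admin binding created earlier can take effect; the timestamps show roughly a 500ms interval. The loop reduces to:

    # poll for the default service account (interval inferred from the timestamps above)
    until sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done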
	I0821 03:34:32.388516    1442 kubeadm.go:406] StartCluster complete in 20.01278175s
	I0821 03:34:32.388525    1442 settings.go:142] acquiring lock: {Name:mkeb461ec3a6a92ee32ce41e8df63d6759cb2728 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:34:32.388680    1442 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17102-920/kubeconfig
	I0821 03:34:32.388902    1442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/kubeconfig: {Name:mk2bc9c64ad130c36a0253707ac2ba3f8fd22371 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:34:32.389107    1442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0821 03:34:32.389147    1442 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0821 03:34:32.389221    1442 addons.go:69] Setting volumesnapshots=true in profile "addons-500000"
	I0821 03:34:32.389227    1442 addons.go:231] Setting addon volumesnapshots=true in "addons-500000"
	I0821 03:34:32.389225    1442 addons.go:69] Setting cloud-spanner=true in profile "addons-500000"
	I0821 03:34:32.389236    1442 addons.go:231] Setting addon cloud-spanner=true in "addons-500000"
	I0821 03:34:32.389251    1442 config.go:182] Loaded profile config "addons-500000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0821 03:34:32.389271    1442 host.go:66] Checking if "addons-500000" exists ...
	I0821 03:34:32.389279    1442 addons.go:69] Setting storage-provisioner=true in profile "addons-500000"
	I0821 03:34:32.389222    1442 addons.go:69] Setting gcp-auth=true in profile "addons-500000"
	I0821 03:34:32.389282    1442 addons.go:231] Setting addon storage-provisioner=true in "addons-500000"
	I0821 03:34:32.389288    1442 mustload.go:65] Loading cluster: addons-500000
	I0821 03:34:32.389299    1442 host.go:66] Checking if "addons-500000" exists ...
	I0821 03:34:32.389299    1442 addons.go:69] Setting inspektor-gadget=true in profile "addons-500000"
	I0821 03:34:32.389327    1442 addons.go:69] Setting registry=true in profile "addons-500000"
	I0821 03:34:32.389360    1442 config.go:182] Loaded profile config "addons-500000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0821 03:34:32.389358    1442 addons.go:69] Setting ingress-dns=true in profile "addons-500000"
	I0821 03:34:32.389378    1442 addons.go:231] Setting addon ingress-dns=true in "addons-500000"
	I0821 03:34:32.389273    1442 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-500000"
	I0821 03:34:32.389396    1442 addons.go:69] Setting ingress=true in profile "addons-500000"
	I0821 03:34:32.389434    1442 host.go:66] Checking if "addons-500000" exists ...
	I0821 03:34:32.389418    1442 addons.go:69] Setting metrics-server=true in profile "addons-500000"
	I0821 03:34:32.389454    1442 addons.go:231] Setting addon metrics-server=true in "addons-500000"
	I0821 03:34:32.389465    1442 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-500000"
	I0821 03:34:32.389506    1442 host.go:66] Checking if "addons-500000" exists ...
	I0821 03:34:32.389519    1442 host.go:66] Checking if "addons-500000" exists ...
	I0821 03:34:32.389271    1442 host.go:66] Checking if "addons-500000" exists ...
	W0821 03:34:32.389564    1442 host.go:54] host status for "addons-500000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/monitor: connect: connection refused
	W0821 03:34:32.389572    1442 addons.go:277] "addons-500000" is not running, setting storage-provisioner=true and skipping enablement (err=<nil>)
	I0821 03:34:32.389347    1442 addons.go:231] Setting addon inspektor-gadget=true in "addons-500000"
	I0821 03:34:32.389693    1442 host.go:66] Checking if "addons-500000" exists ...
	W0821 03:34:32.389757    1442 host.go:54] host status for "addons-500000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/monitor: connect: connection refused
	W0821 03:34:32.389767    1442 addons.go:277] "addons-500000" is not running, setting cloud-spanner=true and skipping enablement (err=<nil>)
	I0821 03:34:32.389367    1442 addons.go:231] Setting addon registry=true in "addons-500000"
	I0821 03:34:32.389786    1442 host.go:66] Checking if "addons-500000" exists ...
	W0821 03:34:32.389790    1442 host.go:54] host status for "addons-500000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/monitor: connect: connection refused
	W0821 03:34:32.389796    1442 addons.go:277] "addons-500000" is not running, setting metrics-server=true and skipping enablement (err=<nil>)
	I0821 03:34:32.389799    1442 addons.go:467] Verifying addon metrics-server=true in "addons-500000"
	W0821 03:34:32.389788    1442 host.go:54] host status for "addons-500000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/monitor: connect: connection refused
	W0821 03:34:32.389803    1442 addons.go:277] "addons-500000" is not running, setting csi-hostpath-driver=true and skipping enablement (err=<nil>)
	I0821 03:34:32.389805    1442 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-500000"
	I0821 03:34:32.389275    1442 addons.go:69] Setting default-storageclass=true in profile "addons-500000"
	I0821 03:34:32.394058    1442 out.go:177] * Verifying csi-hostpath-driver addon...
	I0821 03:34:32.389436    1442 addons.go:231] Setting addon ingress=true in "addons-500000"
	I0821 03:34:32.389868    1442 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-500000"
	W0821 03:34:32.389953    1442 host.go:54] host status for "addons-500000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/monitor: connect: connection refused
	W0821 03:34:32.390033    1442 host.go:54] host status for "addons-500000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/monitor: connect: connection refused
	W0821 03:34:32.390053    1442 host.go:54] host status for "addons-500000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/monitor: connect: connection refused
	I0821 03:34:32.390510    1442 host.go:66] Checking if "addons-500000" exists ...
	I0821 03:34:32.409190    1442 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	W0821 03:34:32.404296    1442 addons.go:277] "addons-500000" is not running, setting ingress-dns=true and skipping enablement (err=<nil>)
	W0821 03:34:32.404342    1442 addons.go:277] "addons-500000" is not running, setting inspektor-gadget=true and skipping enablement (err=<nil>)
	W0821 03:34:32.404346    1442 addons.go:277] "addons-500000" is not running, setting registry=true and skipping enablement (err=<nil>)
	I0821 03:34:32.404410    1442 host.go:66] Checking if "addons-500000" exists ...
	I0821 03:34:32.404764    1442 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0821 03:34:32.413218    1442 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0821 03:34:32.413224    1442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0821 03:34:32.413232    1442 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/id_rsa Username:docker}
	I0821 03:34:32.413266    1442 addons.go:467] Verifying addon registry=true in "addons-500000"
	I0821 03:34:32.418274    1442 out.go:177] * Verifying registry addon...
	I0821 03:34:32.419795    1442 addons.go:231] Setting addon default-storageclass=true in "addons-500000"
	I0821 03:34:32.419868    1442 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-500000" context rescaled to 1 replicas
	I0821 03:34:32.420817    1442 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0821 03:34:32.421498    1442 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0821 03:34:32.421694    1442 host.go:66] Checking if "addons-500000" exists ...
	I0821 03:34:32.421701    1442 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0821 03:34:32.421849    1442 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0821 03:34:32.431173    1442 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0821 03:34:32.440212    1442 out.go:177] * Verifying Kubernetes components...
	I0821 03:34:32.431974    1442 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0821 03:34:32.435186    1442 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=registry
	I0821 03:34:32.444202    1442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0821 03:34:32.444209    1442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0821 03:34:32.447466    1442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
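The sed pipeline above rewrites the coredns ConfigMap in place, inserting a hosts block before the forward directive (and a log directive before errors) so that host.minikube.internal resolves from inside the cluster. The injected Corefile fragment:

    hosts {
       192.168.105.1 host.minikube.internal
       fallthrough
    }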
	I0821 03:34:32.448196    1442 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.8.1
	I0821 03:34:32.448211    1442 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/id_rsa Username:docker}
	I0821 03:34:32.451292    1442 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0821 03:34:32.451299    1442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16083 bytes)
	I0821 03:34:32.451306    1442 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/id_rsa Username:docker}
	I0821 03:34:32.454351    1442 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0821 03:34:32.454358    1442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0821 03:34:32.485876    1442 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0821 03:34:32.485886    1442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0821 03:34:32.513135    1442 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0821 03:34:32.513147    1442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0821 03:34:32.532036    1442 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0821 03:34:32.532052    1442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0821 03:34:32.537566    1442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0821 03:34:32.542495    1442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0821 03:34:32.548533    1442 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0821 03:34:32.548541    1442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0821 03:34:32.568087    1442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0821 03:34:33.517324    1442 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.069159875s)
	I0821 03:34:33.517338    1442 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.069147125s)
	I0821 03:34:33.517342    1442 start.go:901] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I0821 03:34:33.517808    1442 node_ready.go:35] waiting up to 6m0s for node "addons-500000" to be "Ready" ...
	I0821 03:34:33.519592    1442 node_ready.go:49] node "addons-500000" has status "Ready":"True"
	I0821 03:34:33.519599    1442 node_ready.go:38] duration metric: took 1.779708ms waiting for node "addons-500000" to be "Ready" ...
	I0821 03:34:33.519602    1442 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0821 03:34:33.522687    1442 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-97rp7" in "kube-system" namespace to be "Ready" ...
	I0821 03:34:33.964195    1442 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.421717084s)
	I0821 03:34:33.964211    1442 addons.go:467] Verifying addon ingress=true in "addons-500000"
	I0821 03:34:33.964338    1442 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.396275834s)
	I0821 03:34:33.968723    1442 out.go:177] * Verifying ingress addon...
	W0821 03:34:33.968774    1442 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0821 03:34:33.975741    1442 retry.go:31] will retry after 231.591556ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
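	
	Note: the stderr above is the usual CRD establishment race. A single kubectl apply that creates CRDs and, in the same invocation, a custom resource of the brand-new kind (here the VolumeSnapshotClass "csi-hostpath-snapclass") can fail REST-mapping discovery, because the mapping for snapshot.storage.k8s.io/v1 is looked up before the just-created CRDs have been established. The retry below succeeds for the same reason: by the second pass, the CRDs created on the first attempt are already registered. A minimal sketch of the conventional two-phase fix, reusing the manifest names from the log (this is not minikube's own retry code):
	
	  $ kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	        -f snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	        -f snapshot.storage.k8s.io_volumesnapshots.yaml
	  $ kubectl wait --for=condition=established --timeout=60s \
	        crd/volumesnapshotclasses.snapshot.storage.k8s.io \
	        crd/volumesnapshotcontents.snapshot.storage.k8s.io \
	        crd/volumesnapshots.snapshot.storage.k8s.io
	  $ kubectl apply -f csi-hostpath-snapshotclass.yaml
	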
	I0821 03:34:33.976141    1442 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0821 03:34:33.984299    1442 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0821 03:34:33.984307    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:33.987720    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:34.207434    1442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0821 03:34:34.491123    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:34.991180    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:35.490538    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:35.534205    1442 pod_ready.go:102] pod "coredns-5d78c9869d-97rp7" in "kube-system" namespace has status "Ready":"False"
	I0821 03:34:35.990628    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:36.490998    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:36.745839    1442 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.5384555s)
	I0821 03:34:36.990793    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:37.491119    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:37.534210    1442 pod_ready.go:102] pod "coredns-5d78c9869d-97rp7" in "kube-system" namespace has status "Ready":"False"
	I0821 03:34:37.990643    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:38.490772    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:38.997287    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:39.008172    1442 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0821 03:34:39.008186    1442 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/id_rsa Username:docker}
	I0821 03:34:39.055480    1442 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0821 03:34:39.064828    1442 addons.go:231] Setting addon gcp-auth=true in "addons-500000"
	I0821 03:34:39.064858    1442 host.go:66] Checking if "addons-500000" exists ...
	I0821 03:34:39.065649    1442 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0821 03:34:39.065660    1442 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/id_rsa Username:docker}
	I0821 03:34:39.100776    1442 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0821 03:34:39.103705    1442 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0821 03:34:39.107726    1442 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0821 03:34:39.107734    1442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0821 03:34:39.113078    1442 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0821 03:34:39.113087    1442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0821 03:34:39.127541    1442 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0821 03:34:39.127551    1442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0821 03:34:39.133486    1442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0821 03:34:39.491109    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:39.534694    1442 pod_ready.go:102] pod "coredns-5d78c9869d-97rp7" in "kube-system" namespace has status "Ready":"False"
	I0821 03:34:39.629710    1442 addons.go:467] Verifying addon gcp-auth=true in "addons-500000"
	I0821 03:34:39.641410    1442 out.go:177] * Verifying gcp-auth addon...
	I0821 03:34:39.650441    1442 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0821 03:34:39.656554    1442 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0821 03:34:39.656563    1442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 03:34:39.658191    1442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 03:34:39.991177    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:40.161154    1442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 03:34:40.492443    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:40.660810    1442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 03:34:40.990558    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:41.161357    1442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 03:34:41.492269    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:41.534695    1442 pod_ready.go:102] pod "coredns-5d78c9869d-97rp7" in "kube-system" namespace has status "Ready":"False"
	I0821 03:34:41.660947    1442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 03:34:41.990678    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:42.161013    1442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 03:34:42.490658    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:42.660884    1442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 03:34:42.990530    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:43.161042    1442 kapi.go:107] duration metric: took 3.510698166s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0821 03:34:43.165184    1442 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-500000 cluster.
	I0821 03:34:43.169238    1442 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0821 03:34:43.173158    1442 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
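	
	Note: the gcp-auth-skip-secret opt-out mentioned above is an ordinary pod label; the mutating webhook leaves any pod carrying it untouched at admission, so the label has to be present when the pod is created. A hedged example (pod name and image are placeholders):
	
	  $ kubectl run mypod --image=nginx --labels=gcp-auth-skip-secret=true
	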
	I0821 03:34:43.491145    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:43.534713    1442 pod_ready.go:97] pod "coredns-5d78c9869d-97rp7" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-21 03:34:32 -0700 PDT Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-21 03:34:32 -0700 PDT Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-21 03:34:32 -0700 PDT Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-21 03:34:32 -0700 PDT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.105.2 HostIPs:[] PodIP: PodIPs:[] StartTime:2023-08-21 03:34:32 -0700 PDT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-08-21 03:34:33 -0700 PDT,FinishedAt:2023-08-21 03:34:43 -0700 PDT,ContainerID:docker://d9032391cb53f0fa8cfd4e1696eef2d7eb7096ba08423fd5087bb7b4d2fba5ed,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:docker://d9032391cb53f0fa8cfd4e1696eef2d7eb7096ba08423fd5087bb7b4d2fba5ed Started:0x140018d39a0 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0821 03:34:43.534727    1442 pod_ready.go:81] duration metric: took 10.012309458s waiting for pod "coredns-5d78c9869d-97rp7" in "kube-system" namespace to be "Ready" ...
	E0821 03:34:43.534732    1442 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-5d78c9869d-97rp7" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-21 03:34:32 -0700 PDT Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-21 03:34:32 -0700 PDT Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-21 03:34:32 -0700 PDT Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-21 03:34:32 -0700 PDT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.105.2 HostIPs:[] PodIP: PodIPs:[] StartTime:2023-08-21 03:34:32 -0700 PDT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-08-21 03:34:33 -0700 PDT,FinishedAt:2023-08-21 03:34:43 -0700 PDT,ContainerID:docker://d9032391cb53f0fa8cfd4e1696eef2d7eb7096ba08423fd5087bb7b4d2fba5ed,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:docker://d9032391cb53f0fa8cfd4e1696eef2d7eb7096ba08423fd5087bb7b4d2fba5ed Started:0x140018d39a0 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
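	
	Note: the "Succeeded" phase above is not a crash. The coredns container exited 0 (Reason:Completed), which is what a CoreDNS replica looks like when its Deployment is scaled down to a single replica during start; the wait loop correctly skips the finished pod and moves on to the surviving replica (coredns-5d78c9869d-hbg44, Ready below). The same picture can be reproduced by hand, assuming the cluster is reachable (pod names differ per run):
	
	  $ kubectl -n kube-system get pods -l k8s-app=kube-dns \
	        -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'
	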
	I0821 03:34:43.534736    1442 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-hbg44" in "kube-system" namespace to be "Ready" ...
	I0821 03:34:43.537136    1442 pod_ready.go:92] pod "coredns-5d78c9869d-hbg44" in "kube-system" namespace has status "Ready":"True"
	I0821 03:34:43.537140    1442 pod_ready.go:81] duration metric: took 2.400375ms waiting for pod "coredns-5d78c9869d-hbg44" in "kube-system" namespace to be "Ready" ...
	I0821 03:34:43.537145    1442 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-500000" in "kube-system" namespace to be "Ready" ...
	I0821 03:34:43.539758    1442 pod_ready.go:92] pod "etcd-addons-500000" in "kube-system" namespace has status "Ready":"True"
	I0821 03:34:43.539762    1442 pod_ready.go:81] duration metric: took 2.614916ms waiting for pod "etcd-addons-500000" in "kube-system" namespace to be "Ready" ...
	I0821 03:34:43.539766    1442 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-500000" in "kube-system" namespace to be "Ready" ...
	I0821 03:34:43.542039    1442 pod_ready.go:92] pod "kube-apiserver-addons-500000" in "kube-system" namespace has status "Ready":"True"
	I0821 03:34:43.542045    1442 pod_ready.go:81] duration metric: took 2.276584ms waiting for pod "kube-apiserver-addons-500000" in "kube-system" namespace to be "Ready" ...
	I0821 03:34:43.542049    1442 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-500000" in "kube-system" namespace to be "Ready" ...
	I0821 03:34:43.544341    1442 pod_ready.go:92] pod "kube-controller-manager-addons-500000" in "kube-system" namespace has status "Ready":"True"
	I0821 03:34:43.544345    1442 pod_ready.go:81] duration metric: took 2.2935ms waiting for pod "kube-controller-manager-addons-500000" in "kube-system" namespace to be "Ready" ...
	I0821 03:34:43.544348    1442 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-z2wj9" in "kube-system" namespace to be "Ready" ...
	I0821 03:34:43.933736    1442 pod_ready.go:92] pod "kube-proxy-z2wj9" in "kube-system" namespace has status "Ready":"True"
	I0821 03:34:43.933748    1442 pod_ready.go:81] duration metric: took 389.407375ms waiting for pod "kube-proxy-z2wj9" in "kube-system" namespace to be "Ready" ...
	I0821 03:34:43.933752    1442 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-500000" in "kube-system" namespace to be "Ready" ...
	I0821 03:34:43.990470    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:44.334535    1442 pod_ready.go:92] pod "kube-scheduler-addons-500000" in "kube-system" namespace has status "Ready":"True"
	I0821 03:34:44.334545    1442 pod_ready.go:81] duration metric: took 400.801125ms waiting for pod "kube-scheduler-addons-500000" in "kube-system" namespace to be "Ready" ...
	I0821 03:34:44.334549    1442 pod_ready.go:38] duration metric: took 10.81524225s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0821 03:34:44.334558    1442 api_server.go:52] waiting for apiserver process to appear ...
	I0821 03:34:44.334639    1442 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0821 03:34:44.339980    1442 api_server.go:72] duration metric: took 11.909098333s to wait for apiserver process to appear ...
	I0821 03:34:44.339987    1442 api_server.go:88] waiting for apiserver healthz status ...
	I0821 03:34:44.339993    1442 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I0821 03:34:44.344178    1442 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
	I0821 03:34:44.344920    1442 api_server.go:141] control plane version: v1.27.4
	I0821 03:34:44.344925    1442 api_server.go:131] duration metric: took 4.936ms to wait for apiserver health ...
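	
	Note: the healthz probe above is a plain HTTPS GET. Under default RBAC, /healthz (like /livez and /readyz) is readable by unauthenticated clients via the system:public-info-viewer ClusterRole, so the same check can be reproduced from the host; -k skips verification of the cluster's self-signed serving certificate:
	
	  $ curl -k https://192.168.105.2:8443/healthz
	  ok
	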
	I0821 03:34:44.344929    1442 system_pods.go:43] waiting for kube-system pods to appear ...
	I0821 03:34:44.490452    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:44.535983    1442 system_pods.go:59] 8 kube-system pods found
	I0821 03:34:44.535991    1442 system_pods.go:61] "coredns-5d78c9869d-hbg44" [2212048e-385c-4235-ad14-1b9e4e812106] Running
	I0821 03:34:44.535994    1442 system_pods.go:61] "etcd-addons-500000" [dcde2eed-b2a3-4b2d-af51-14d42189714c] Running
	I0821 03:34:44.536011    1442 system_pods.go:61] "kube-apiserver-addons-500000" [a4c38aeb-a7ef-4239-ac34-2437f9c67d96] Running
	I0821 03:34:44.536015    1442 system_pods.go:61] "kube-controller-manager-addons-500000" [972b1e42-cd56-4f77-ad52-a1df2b79fdae] Running
	I0821 03:34:44.536018    1442 system_pods.go:61] "kube-proxy-z2wj9" [56cdd0e9-2b8f-476e-be08-a52381eecb16] Running
	I0821 03:34:44.536020    1442 system_pods.go:61] "kube-scheduler-addons-500000" [c2d2f1e5-45c6-48a9-990d-7e32d9d75976] Running
	I0821 03:34:44.536022    1442 system_pods.go:61] "snapshot-controller-75bbb956b9-4pgqh" [7452ce04-2fbb-4f7a-9e5f-87b8b577fc94] Running
	I0821 03:34:44.536025    1442 system_pods.go:61] "snapshot-controller-75bbb956b9-j9mkf" [dbd2a297-29a5-4435-8fb1-849d8ae91771] Running
	I0821 03:34:44.536028    1442 system_pods.go:74] duration metric: took 191.1015ms to wait for pod list to return data ...
	I0821 03:34:44.536033    1442 default_sa.go:34] waiting for default service account to be created ...
	I0821 03:34:44.734042    1442 default_sa.go:45] found service account: "default"
	I0821 03:34:44.734051    1442 default_sa.go:55] duration metric: took 198.020583ms for default service account to be created ...
	I0821 03:34:44.734055    1442 system_pods.go:116] waiting for k8s-apps to be running ...
	I0821 03:34:44.935348    1442 system_pods.go:86] 8 kube-system pods found
	I0821 03:34:44.935359    1442 system_pods.go:89] "coredns-5d78c9869d-hbg44" [2212048e-385c-4235-ad14-1b9e4e812106] Running
	I0821 03:34:44.935362    1442 system_pods.go:89] "etcd-addons-500000" [dcde2eed-b2a3-4b2d-af51-14d42189714c] Running
	I0821 03:34:44.935365    1442 system_pods.go:89] "kube-apiserver-addons-500000" [a4c38aeb-a7ef-4239-ac34-2437f9c67d96] Running
	I0821 03:34:44.935367    1442 system_pods.go:89] "kube-controller-manager-addons-500000" [972b1e42-cd56-4f77-ad52-a1df2b79fdae] Running
	I0821 03:34:44.935369    1442 system_pods.go:89] "kube-proxy-z2wj9" [56cdd0e9-2b8f-476e-be08-a52381eecb16] Running
	I0821 03:34:44.935372    1442 system_pods.go:89] "kube-scheduler-addons-500000" [c2d2f1e5-45c6-48a9-990d-7e32d9d75976] Running
	I0821 03:34:44.935374    1442 system_pods.go:89] "snapshot-controller-75bbb956b9-4pgqh" [7452ce04-2fbb-4f7a-9e5f-87b8b577fc94] Running
	I0821 03:34:44.935376    1442 system_pods.go:89] "snapshot-controller-75bbb956b9-j9mkf" [dbd2a297-29a5-4435-8fb1-849d8ae91771] Running
	I0821 03:34:44.935380    1442 system_pods.go:126] duration metric: took 201.327917ms to wait for k8s-apps to be running ...
	I0821 03:34:44.935391    1442 system_svc.go:44] waiting for kubelet service to be running ....
	I0821 03:34:44.935475    1442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0821 03:34:44.941643    1442 system_svc.go:56] duration metric: took 6.252209ms WaitForService to wait for kubelet.
	I0821 03:34:44.941651    1442 kubeadm.go:581] duration metric: took 12.5107865s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0821 03:34:44.941660    1442 node_conditions.go:102] verifying NodePressure condition ...
	I0821 03:34:44.990746    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:45.134674    1442 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0821 03:34:45.134706    1442 node_conditions.go:123] node cpu capacity is 2
	I0821 03:34:45.134712    1442 node_conditions.go:105] duration metric: took 193.055083ms to run NodePressure ...
	I0821 03:34:45.134717    1442 start.go:228] waiting for startup goroutines ...
	I0821 03:34:45.490470    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:45.990643    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:46.490327    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:46.990587    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:47.490536    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:47.990358    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:48.490279    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:48.990490    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:49.490328    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:49.990414    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:50.490337    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:50.990260    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:51.490639    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:51.989843    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:52.490813    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:52.990112    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:53.491005    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:53.992627    1442 kapi.go:107] duration metric: took 20.017033875s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0821 03:40:32.405313    1442 kapi.go:107] duration metric: took 6m0.010490834s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	W0821 03:40:32.405643    1442 out.go:239] ! Enabling 'csi-hostpath-driver' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=csi-hostpath-driver pods: context deadline exceeded]
	I0821 03:40:32.421828    1442 kapi.go:107] duration metric: took 6m0.009978583s to wait for kubernetes.io/minikube-addons=registry ...
	W0821 03:40:32.421921    1442 out.go:239] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
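	
	Note: both six-minute waits expired with "context deadline exceeded", which here means no pod ever matched the label selector at all, not that matching pods were found unhealthy. A reasonable first diagnostic, assuming the cluster is still up:
	
	  $ kubectl get pods -A -l kubernetes.io/minikube-addons=registry
	  $ kubectl get pods -A -l kubernetes.io/minikube-addons=csi-hostpath-driver
	  $ kubectl get events -A --sort-by=.lastTimestamp | tail -n 20
	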
	I0821 03:40:32.430174    1442 out.go:177] * Enabled addons: storage-provisioner, cloud-spanner, metrics-server, ingress-dns, inspektor-gadget, default-storageclass, volumesnapshots, gcp-auth, ingress
	I0821 03:40:32.437176    1442 addons.go:502] enable addons completed in 6m0.058033333s: enabled=[storage-provisioner cloud-spanner metrics-server ingress-dns inspektor-gadget default-storageclass volumesnapshots gcp-auth ingress]
	I0821 03:40:32.437214    1442 start.go:233] waiting for cluster config update ...
	I0821 03:40:32.437252    1442 start.go:242] writing updated cluster config ...
	I0821 03:40:32.438394    1442 ssh_runner.go:195] Run: rm -f paused
	I0821 03:40:32.505190    1442 start.go:600] kubectl: 1.27.2, cluster: 1.27.4 (minor skew: 0)
	I0821 03:40:32.509248    1442 out.go:177] * Done! kubectl is now configured to use "addons-500000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Mon 2023-08-21 10:34:00 UTC, ends at Mon 2023-08-21 11:04:42 UTC. --
	Aug 21 11:04:15 addons-500000 dockerd[1153]: time="2023-08-21T11:04:15.513744257Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 21 11:04:15 addons-500000 dockerd[1148]: time="2023-08-21T11:04:15.557393445Z" level=info msg="ignoring event" container=61cb73773eecc3faafe56084535ad2d59c6b1097346767deab59c844d247f185 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 21 11:04:15 addons-500000 dockerd[1153]: time="2023-08-21T11:04:15.557558654Z" level=info msg="shim disconnected" id=61cb73773eecc3faafe56084535ad2d59c6b1097346767deab59c844d247f185 namespace=moby
	Aug 21 11:04:15 addons-500000 dockerd[1153]: time="2023-08-21T11:04:15.557585820Z" level=warning msg="cleaning up after shim disconnected" id=61cb73773eecc3faafe56084535ad2d59c6b1097346767deab59c844d247f185 namespace=moby
	Aug 21 11:04:15 addons-500000 dockerd[1153]: time="2023-08-21T11:04:15.557590195Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 21 11:04:35 addons-500000 dockerd[1153]: time="2023-08-21T11:04:35.223660239Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 21 11:04:35 addons-500000 dockerd[1153]: time="2023-08-21T11:04:35.223919947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 21 11:04:35 addons-500000 dockerd[1153]: time="2023-08-21T11:04:35.223951739Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 21 11:04:35 addons-500000 dockerd[1153]: time="2023-08-21T11:04:35.223977614Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 21 11:04:35 addons-500000 cri-dockerd[1049]: time="2023-08-21T11:04:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a2fdb8bd4cd8bccaf693b10cbd476696a65b5dc74f77697818159456635d2392/resolv.conf as [nameserver 10.96.0.10 search headlamp.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Aug 21 11:04:35 addons-500000 dockerd[1148]: time="2023-08-21T11:04:35.590239318Z" level=warning msg="reference for unknown type: " digest="sha256:498ea22dc5acadaa4015e7a50335d21fdce45d9e8f1f8adf29c2777da4182f98" remote="ghcr.io/headlamp-k8s/headlamp@sha256:498ea22dc5acadaa4015e7a50335d21fdce45d9e8f1f8adf29c2777da4182f98"
	Aug 21 11:04:36 addons-500000 dockerd[1148]: time="2023-08-21T11:04:36.471134255Z" level=info msg="Container failed to exit within 1s of signal 15 - using the force" container=734d7d69c9e8bff04a74f5ce2f78304cb992055330ae2198a5c1e05f571cd97e
	Aug 21 11:04:36 addons-500000 dockerd[1148]: time="2023-08-21T11:04:36.544752455Z" level=info msg="ignoring event" container=734d7d69c9e8bff04a74f5ce2f78304cb992055330ae2198a5c1e05f571cd97e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 21 11:04:36 addons-500000 dockerd[1153]: time="2023-08-21T11:04:36.544910872Z" level=info msg="shim disconnected" id=734d7d69c9e8bff04a74f5ce2f78304cb992055330ae2198a5c1e05f571cd97e namespace=moby
	Aug 21 11:04:36 addons-500000 dockerd[1153]: time="2023-08-21T11:04:36.544944080Z" level=warning msg="cleaning up after shim disconnected" id=734d7d69c9e8bff04a74f5ce2f78304cb992055330ae2198a5c1e05f571cd97e namespace=moby
	Aug 21 11:04:36 addons-500000 dockerd[1153]: time="2023-08-21T11:04:36.544949997Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 21 11:04:36 addons-500000 dockerd[1148]: time="2023-08-21T11:04:36.616105443Z" level=info msg="ignoring event" container=bbb4a4c960656b62bb19b9b067c655ea39e12d8756d8701729b8421b997616a1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 21 11:04:36 addons-500000 dockerd[1153]: time="2023-08-21T11:04:36.616341902Z" level=info msg="shim disconnected" id=bbb4a4c960656b62bb19b9b067c655ea39e12d8756d8701729b8421b997616a1 namespace=moby
	Aug 21 11:04:36 addons-500000 dockerd[1153]: time="2023-08-21T11:04:36.616383610Z" level=warning msg="cleaning up after shim disconnected" id=bbb4a4c960656b62bb19b9b067c655ea39e12d8756d8701729b8421b997616a1 namespace=moby
	Aug 21 11:04:36 addons-500000 dockerd[1153]: time="2023-08-21T11:04:36.616387944Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 21 11:04:39 addons-500000 cri-dockerd[1049]: time="2023-08-21T11:04:39Z" level=info msg="Stop pulling image ghcr.io/headlamp-k8s/headlamp:v0.19.0@sha256:498ea22dc5acadaa4015e7a50335d21fdce45d9e8f1f8adf29c2777da4182f98: Status: Downloaded newer image for ghcr.io/headlamp-k8s/headlamp@sha256:498ea22dc5acadaa4015e7a50335d21fdce45d9e8f1f8adf29c2777da4182f98"
	Aug 21 11:04:39 addons-500000 dockerd[1153]: time="2023-08-21T11:04:39.296702808Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 21 11:04:39 addons-500000 dockerd[1153]: time="2023-08-21T11:04:39.296734308Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 21 11:04:39 addons-500000 dockerd[1153]: time="2023-08-21T11:04:39.297047017Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 21 11:04:39 addons-500000 dockerd[1153]: time="2023-08-21T11:04:39.297059683Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                         ATTEMPT             POD ID
	77e5446fdd2e0       ghcr.io/headlamp-k8s/headlamp@sha256:498ea22dc5acadaa4015e7a50335d21fdce45d9e8f1f8adf29c2777da4182f98                        3 seconds ago       Running             headlamp                     0                   a2fdb8bd4cd8b
	61cb73773eecc       13753a81eccfd                                                                                                                27 seconds ago      Exited              hello-world-app              4                   a244270f71415
	12742b2537ff1       nginx@sha256:cac882be2b7305e0c8d3e3cd0575a2fd58f5fde6dd5d6299605aa0f3e67ca385                                                2 minutes ago       Running             nginx                        0                   ca7496b30bdd4
	dbe5746b118a6       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf                 30 minutes ago      Running             gcp-auth                     0                   31154fc41fc35
	fc5767357c5d9       8f2588812ab29                                                                                                                30 minutes ago      Exited              patch                        1                   0538e79b5c883
	aa7d89a7d68d0       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:543c40fd093964bc9ab509d3e791f9989963021f1e9e4c9c7b6700b02bfb227b   30 minutes ago      Exited              create                       0                   3c078f4b9885e
	7979593c9bb52       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280      30 minutes ago      Running             volume-snapshot-controller   0                   70a68685a69fb
	fe9609fabef21       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280      30 minutes ago      Running             volume-snapshot-controller   0                   39eda7944d576
	16cfb4c805080       97e04611ad434                                                                                                                30 minutes ago      Running             coredns                      0                   b6fa8f87ea743
	36558206e7ebf       532e5a30e948f                                                                                                                30 minutes ago      Running             kube-proxy                   0                   ccc8633d52ca6
	bd48baf71b163       6eb63895cb67f                                                                                                                30 minutes ago      Running             kube-scheduler               0                   65c9ea48d27ae
	27dc2c0d7a4a5       24bc64e911039                                                                                                                30 minutes ago      Running             etcd                         0                   0f2cdc52bbda6
	dc949a6ce14c1       64aece92d6bde                                                                                                                30 minutes ago      Running             kube-apiserver               0                   090daa0e10080
	41982c5e9fc8f       389f6f052cf83                                                                                                                30 minutes ago      Running             kube-controller-manager      0                   a9c3d15b86bf8
	
	* 
	* ==> coredns [16cfb4c80508] <==
	* [INFO] 10.244.0.11:55380 - 15444 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000192417s
	[INFO] 10.244.0.11:55595 - 33986 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000080917s
	[INFO] 10.244.0.11:55380 - 36243 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000177876s
	[INFO] 10.244.0.11:55380 - 42834 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000146333s
	[INFO] 10.244.0.11:55595 - 5784 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00011875s
	[INFO] 10.244.0.11:55595 - 56910 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000050292s
	[INFO] 10.244.0.11:55380 - 35306 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000218333s
	[INFO] 10.244.0.11:55595 - 64077 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000055958s
	[INFO] 10.244.0.11:55595 - 56884 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000076625s
	[INFO] 10.244.0.11:55595 - 56007 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000070583s
	[INFO] 10.244.0.11:55595 - 54545 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000067333s
	[INFO] 10.244.0.11:51497 - 59355 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000398834s
	[INFO] 10.244.0.11:51497 - 38991 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000209708s
	[INFO] 10.244.0.11:51497 - 6555 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000191958s
	[INFO] 10.244.0.11:51497 - 63288 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000409876s
	[INFO] 10.244.0.11:51497 - 49529 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00012975s
	[INFO] 10.244.0.11:51497 - 3686 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000123626s
	[INFO] 10.244.0.11:51497 - 19423 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000240209s
	[INFO] 10.244.0.11:59481 - 42442 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000222709s
	[INFO] 10.244.0.11:59481 - 36904 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.0001005s
	[INFO] 10.244.0.11:59481 - 14729 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000057417s
	[INFO] 10.244.0.11:59481 - 55234 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000074708s
	[INFO] 10.244.0.11:59481 - 58225 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000045917s
	[INFO] 10.244.0.11:59481 - 23418 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00004575s
	[INFO] 10.244.0.11:59481 - 13624 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000090417s
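	
	Note: the NXDOMAIN bursts above are normal ndots behaviour, not resolution failures. With the in-cluster resolv.conf (three search domains plus options ndots:5, as in the cri-dockerd rewrite logged earlier), a lookup of an already-qualified name is first expanded under each search domain, producing three NXDOMAINs per address family before the bare name returns NOERROR. A trailing dot marks the name fully qualified and skips the expansion entirely; from any pod:
	
	  $ nslookup hello-world-app.default.svc.cluster.local.
	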
	
	* 
	* ==> describe nodes <==
	* Name:               addons-500000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-500000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43
	                    minikube.k8s.io/name=addons-500000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_21T03_34_19_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 21 Aug 2023 10:34:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-500000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 21 Aug 2023 11:04:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 21 Aug 2023 11:02:58 +0000   Mon, 21 Aug 2023 10:34:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 21 Aug 2023 11:02:58 +0000   Mon, 21 Aug 2023 10:34:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 21 Aug 2023 11:02:58 +0000   Mon, 21 Aug 2023 10:34:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 21 Aug 2023 11:02:58 +0000   Mon, 21 Aug 2023 10:34:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-500000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 0e4a1f71467c44c8a10eca186773afe2
	  System UUID:                0e4a1f71467c44c8a10eca186773afe2
	  Boot ID:                    6d5e7ffc-fb7d-41fe-b076-69fd8535d300
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.4
	  Kubelet Version:            v1.27.4
	  Kube-Proxy Version:         v1.27.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-65bdb79f98-l7sq4         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m5s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	  gcp-auth                    gcp-auth-58478865f7-zcg47                0 (0%)        0 (0%)      0 (0%)           0 (0%)         30m
	  headlamp                    headlamp-5c78f74d8d-llcss                0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 coredns-5d78c9869d-hbg44                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     30m
	  kube-system                 etcd-addons-500000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         30m
	  kube-system                 kube-apiserver-addons-500000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         30m
	  kube-system                 kube-controller-manager-addons-500000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         30m
	  kube-system                 kube-proxy-z2wj9                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         30m
	  kube-system                 kube-scheduler-addons-500000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         30m
	  kube-system                 snapshot-controller-75bbb956b9-4pgqh     0 (0%)        0 (0%)      0 (0%)           0 (0%)         30m
	  kube-system                 snapshot-controller-75bbb956b9-j9mkf     0 (0%)        0 (0%)      0 (0%)           0 (0%)         30m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 30m   kube-proxy       
	  Normal  Starting                 30m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  30m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  30m   kubelet          Node addons-500000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30m   kubelet          Node addons-500000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30m   kubelet          Node addons-500000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                30m   kubelet          Node addons-500000 status is now: NodeReady
	  Normal  RegisteredNode           30m   node-controller  Node addons-500000 event: Registered Node addons-500000 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.490829] systemd-fstab-generator[117]: Ignoring "noauto" for root device
	[  +0.044680] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000871] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[Aug21 10:34] systemd-fstab-generator[479]: Ignoring "noauto" for root device
	[  +0.063431] systemd-fstab-generator[490]: Ignoring "noauto" for root device
	[  +0.413293] systemd-fstab-generator[750]: Ignoring "noauto" for root device
	[  +0.194883] systemd-fstab-generator[786]: Ignoring "noauto" for root device
	[  +0.079334] systemd-fstab-generator[797]: Ignoring "noauto" for root device
	[  +0.075319] systemd-fstab-generator[810]: Ignoring "noauto" for root device
	[  +1.241580] systemd-fstab-generator[968]: Ignoring "noauto" for root device
	[  +0.080868] systemd-fstab-generator[979]: Ignoring "noauto" for root device
	[  +0.070572] systemd-fstab-generator[990]: Ignoring "noauto" for root device
	[  +0.067357] systemd-fstab-generator[1001]: Ignoring "noauto" for root device
	[  +0.069942] systemd-fstab-generator[1042]: Ignoring "noauto" for root device
	[  +2.503453] systemd-fstab-generator[1141]: Ignoring "noauto" for root device
	[  +2.381640] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.661766] systemd-fstab-generator[1457]: Ignoring "noauto" for root device
	[  +5.156537] systemd-fstab-generator[2350]: Ignoring "noauto" for root device
	[ +13.738428] kauditd_printk_skb: 41 callbacks suppressed
	[  +1.700338] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[  +4.800757] kauditd_printk_skb: 48 callbacks suppressed
	[ +14.143799] kauditd_printk_skb: 54 callbacks suppressed
	[Aug21 11:02] kauditd_printk_skb: 1 callbacks suppressed
	[Aug21 11:04] kauditd_printk_skb: 1 callbacks suppressed
	[  +5.307462] kauditd_printk_skb: 10 callbacks suppressed
	
	* 
	* ==> etcd [27dc2c0d7a4a] <==
	* {"level":"info","ts":"2023-08-21T10:34:15.991Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-21T10:34:15.991Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-21T10:34:15.991Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-08-21T10:34:15.992Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-08-21T10:34:16.003Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-08-21T10:34:15.992Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-21T10:34:16.003Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.105.2:2379"}
	{"level":"info","ts":"2023-08-21T10:34:15.992Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-21T10:34:16.003Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-21T10:34:16.003Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-21T10:44:16.025Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":841}
	{"level":"info","ts":"2023-08-21T10:44:16.028Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":841,"took":"2.672822ms","hash":3376273956}
	{"level":"info","ts":"2023-08-21T10:44:16.028Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3376273956,"revision":841,"compact-revision":-1}
	{"level":"info","ts":"2023-08-21T10:49:16.035Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1031}
	{"level":"info","ts":"2023-08-21T10:49:16.038Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1031,"took":"1.375633ms","hash":1895539758}
	{"level":"info","ts":"2023-08-21T10:49:16.038Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1895539758,"revision":1031,"compact-revision":841}
	{"level":"info","ts":"2023-08-21T10:54:16.045Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1222}
	{"level":"info","ts":"2023-08-21T10:54:16.047Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1222,"took":"1.459351ms","hash":3279763987}
	{"level":"info","ts":"2023-08-21T10:54:16.047Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3279763987,"revision":1222,"compact-revision":1031}
	{"level":"info","ts":"2023-08-21T10:59:16.058Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1413}
	{"level":"info","ts":"2023-08-21T10:59:16.061Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1413,"took":"1.488371ms","hash":1268235317}
	{"level":"info","ts":"2023-08-21T10:59:16.061Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1268235317,"revision":1413,"compact-revision":1222}
	{"level":"info","ts":"2023-08-21T11:04:16.067Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1603}
	{"level":"info","ts":"2023-08-21T11:04:16.069Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1603,"took":"1.243127ms","hash":1670643557}
	{"level":"info","ts":"2023-08-21T11:04:16.070Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1670643557,"revision":1603,"compact-revision":1413}
	
	* 
	* ==> gcp-auth [dbe5746b118a] <==
	* 2023/08/21 10:34:42 GCP Auth Webhook started!
	2023/08/21 11:02:26 Ready to marshal response ...
	2023/08/21 11:02:26 Ready to write response ...
	2023/08/21 11:02:37 Ready to marshal response ...
	2023/08/21 11:02:37 Ready to write response ...
	2023/08/21 11:04:34 Ready to marshal response ...
	2023/08/21 11:04:34 Ready to write response ...
	2023/08/21 11:04:34 Ready to marshal response ...
	2023/08/21 11:04:34 Ready to write response ...
	2023/08/21 11:04:34 Ready to marshal response ...
	2023/08/21 11:04:34 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  11:04:43 up 30 min,  0 users,  load average: 0.31, 0.42, 0.34
	Linux addons-500000 5.10.57 #1 SMP PREEMPT Fri Jul 14 22:49:12 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [dc949a6ce14c] <==
	* I0821 10:49:16.766169       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0821 10:54:16.749624       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0821 10:54:16.750123       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0821 10:54:16.755478       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0821 10:54:16.755644       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0821 10:54:16.765351       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0821 10:54:16.765428       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0821 10:59:16.750519       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0821 10:59:16.751153       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0821 10:59:16.751904       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0821 10:59:16.752113       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0821 10:59:16.761892       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0821 10:59:16.761965       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0821 11:02:26.738684       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0821 11:02:26.869600       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs=map[IPv4:10.111.106.162]
	I0821 11:02:37.171860       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs=map[IPv4:10.102.172.159]
	I0821 11:04:16.751175       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0821 11:04:16.751671       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0821 11:04:16.751839       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0821 11:04:16.751936       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0821 11:04:16.752119       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0821 11:04:16.752232       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0821 11:04:34.815110       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs=map[IPv4:10.104.124.111]
	E0821 11:04:35.469619       1 authentication.go:70] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	E0821 11:04:35.737559       1 authentication.go:70] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	* 
	* ==> kube-controller-manager [41982c5e9fc8] <==
	* I0821 10:34:42.858553       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0821 10:34:42.858609       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0821 10:34:42.859646       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0821 10:34:42.893612       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0821 10:34:42.895861       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0821 10:34:42.897862       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-create" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0821 10:34:42.897954       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0821 10:34:42.899189       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0821 10:35:01.688712       1 resource_quota_monitor.go:223] "QuotaMonitor created object count evaluator" resource="volumesnapshots.snapshot.storage.k8s.io"
	I0821 10:35:01.688853       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0821 10:35:01.789717       1 shared_informer.go:318] Caches are synced for resource quota
	I0821 10:35:02.109377       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0821 10:35:02.210585       1 shared_informer.go:318] Caches are synced for garbage collector
	I0821 10:35:12.010356       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0821 10:35:12.011197       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0821 10:35:12.022044       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0821 10:35:12.024702       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0821 11:02:37.084707       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-65bdb79f98 to 1"
	I0821 11:02:37.090750       1 event.go:307] "Event occurred" object="default/hello-world-app-65bdb79f98" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-65bdb79f98-l7sq4"
	I0821 11:04:34.826986       1 event.go:307] "Event occurred" object="headlamp/headlamp" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set headlamp-5c78f74d8d to 1"
	I0821 11:04:34.830389       1 event.go:307] "Event occurred" object="headlamp/headlamp-5c78f74d8d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"headlamp-5c78f74d8d-\" is forbidden: error looking up service account headlamp/headlamp: serviceaccount \"headlamp\" not found"
	E0821 11:04:34.834097       1 replica_set.go:544] sync "headlamp/headlamp-5c78f74d8d" failed with pods "headlamp-5c78f74d8d-" is forbidden: error looking up service account headlamp/headlamp: serviceaccount "headlamp" not found
	I0821 11:04:34.844227       1 event.go:307] "Event occurred" object="headlamp/headlamp-5c78f74d8d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: headlamp-5c78f74d8d-llcss"
	I0821 11:04:35.452239       1 job_controller.go:523] enqueueing job ingress-nginx/ingress-nginx-admission-create
	I0821 11:04:35.466674       1 job_controller.go:523] enqueueing job ingress-nginx/ingress-nginx-admission-patch
	
	* 
	* ==> kube-proxy [36558206e7eb] <==
	* I0821 10:34:32.961845       1 node.go:141] Successfully retrieved node IP: 192.168.105.2
	I0821 10:34:32.961903       1 server_others.go:110] "Detected node IP" address="192.168.105.2"
	I0821 10:34:32.961922       1 server_others.go:554] "Using iptables proxy"
	I0821 10:34:32.984111       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0821 10:34:32.984124       1 server_others.go:192] "Using iptables Proxier"
	I0821 10:34:32.984147       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0821 10:34:32.984347       1 server.go:658] "Version info" version="v1.27.4"
	I0821 10:34:32.984357       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0821 10:34:32.984958       1 config.go:315] "Starting node config controller"
	I0821 10:34:32.984965       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0821 10:34:32.985291       1 config.go:188] "Starting service config controller"
	I0821 10:34:32.985295       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0821 10:34:32.985301       1 config.go:97] "Starting endpoint slice config controller"
	I0821 10:34:32.985318       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0821 10:34:33.085576       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0821 10:34:33.085604       1 shared_informer.go:318] Caches are synced for node config
	I0821 10:34:33.085608       1 shared_informer.go:318] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [bd48baf71b16] <==
	* W0821 10:34:16.768490       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0821 10:34:16.768493       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0821 10:34:16.768508       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0821 10:34:16.768511       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0821 10:34:16.768562       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0821 10:34:16.768566       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0821 10:34:17.606010       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0821 10:34:17.606029       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0821 10:34:17.645166       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0821 10:34:17.645193       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0821 10:34:17.674598       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0821 10:34:17.674623       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0821 10:34:17.707767       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0821 10:34:17.707781       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0821 10:34:17.724040       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0821 10:34:17.724057       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0821 10:34:17.728085       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0821 10:34:17.728146       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0821 10:34:17.756871       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0821 10:34:17.756889       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0821 10:34:17.785527       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0821 10:34:17.785576       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0821 10:34:17.785527       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0821 10:34:17.785647       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0821 10:34:20.949364       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-08-21 10:34:00 UTC, ends at Mon 2023-08-21 11:04:43 UTC. --
	Aug 21 11:04:19 addons-500000 kubelet[2369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 21 11:04:19 addons-500000 kubelet[2369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 21 11:04:19 addons-500000 kubelet[2369]:  > table=nat chain=KUBE-KUBELET-CANARY
	Aug 21 11:04:30 addons-500000 kubelet[2369]: I0821 11:04:30.452665    2369 scope.go:115] "RemoveContainer" containerID="61cb73773eecc3faafe56084535ad2d59c6b1097346767deab59c844d247f185"
	Aug 21 11:04:30 addons-500000 kubelet[2369]: E0821 11:04:30.456370    2369 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=hello-world-app pod=hello-world-app-65bdb79f98-l7sq4_default(03900f9a-54f5-4d53-8e78-2fb31aa983b5)\"" pod="default/hello-world-app-65bdb79f98-l7sq4" podUID=03900f9a-54f5-4d53-8e78-2fb31aa983b5
	Aug 21 11:04:34 addons-500000 kubelet[2369]: I0821 11:04:34.853525    2369 topology_manager.go:212] "Topology Admit Handler"
	Aug 21 11:04:35 addons-500000 kubelet[2369]: I0821 11:04:35.042133    2369 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/eadedc67-c7c0-4100-b508-c6e015e959bb-gcp-creds\") pod \"headlamp-5c78f74d8d-llcss\" (UID: \"eadedc67-c7c0-4100-b508-c6e015e959bb\") " pod="headlamp/headlamp-5c78f74d8d-llcss"
	Aug 21 11:04:35 addons-500000 kubelet[2369]: I0821 11:04:35.042173    2369 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l78nw\" (UniqueName: \"kubernetes.io/projected/eadedc67-c7c0-4100-b508-c6e015e959bb-kube-api-access-l78nw\") pod \"headlamp-5c78f74d8d-llcss\" (UID: \"eadedc67-c7c0-4100-b508-c6e015e959bb\") " pod="headlamp/headlamp-5c78f74d8d-llcss"
	Aug 21 11:04:35 addons-500000 kubelet[2369]: E0821 11:04:35.463060    2369 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7799c6795f-4ppd9.177d612bbdfe556b", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7799c6795f-4ppd9", UID:"c950764c-9601-4c76-adb3-ddb61bd6335d", APIVersion:"v1", ResourceVersion:"453", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"addons-500000"}, FirstTimestamp:time.Date(2023, time.August, 21, 11, 4, 35, 460224363, time.Local), LastTimestamp:time.Date(2023, time.August, 21, 11, 4, 35, 460224363, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7799c6795f-4ppd9.177d612bbdfe556b" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Aug 21 11:04:36 addons-500000 kubelet[2369]: I0821 11:04:36.755180    2369 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbn82\" (UniqueName: \"kubernetes.io/projected/c950764c-9601-4c76-adb3-ddb61bd6335d-kube-api-access-vbn82\") pod \"c950764c-9601-4c76-adb3-ddb61bd6335d\" (UID: \"c950764c-9601-4c76-adb3-ddb61bd6335d\") "
	Aug 21 11:04:36 addons-500000 kubelet[2369]: I0821 11:04:36.755204    2369 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c950764c-9601-4c76-adb3-ddb61bd6335d-webhook-cert\") pod \"c950764c-9601-4c76-adb3-ddb61bd6335d\" (UID: \"c950764c-9601-4c76-adb3-ddb61bd6335d\") "
	Aug 21 11:04:36 addons-500000 kubelet[2369]: I0821 11:04:36.759728    2369 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c950764c-9601-4c76-adb3-ddb61bd6335d-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "c950764c-9601-4c76-adb3-ddb61bd6335d" (UID: "c950764c-9601-4c76-adb3-ddb61bd6335d"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 21 11:04:36 addons-500000 kubelet[2369]: I0821 11:04:36.759756    2369 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c950764c-9601-4c76-adb3-ddb61bd6335d-kube-api-access-vbn82" (OuterVolumeSpecName: "kube-api-access-vbn82") pod "c950764c-9601-4c76-adb3-ddb61bd6335d" (UID: "c950764c-9601-4c76-adb3-ddb61bd6335d"). InnerVolumeSpecName "kube-api-access-vbn82". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 21 11:04:36 addons-500000 kubelet[2369]: I0821 11:04:36.856139    2369 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c950764c-9601-4c76-adb3-ddb61bd6335d-webhook-cert\") on node \"addons-500000\" DevicePath \"\""
	Aug 21 11:04:36 addons-500000 kubelet[2369]: I0821 11:04:36.856154    2369 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-vbn82\" (UniqueName: \"kubernetes.io/projected/c950764c-9601-4c76-adb3-ddb61bd6335d-kube-api-access-vbn82\") on node \"addons-500000\" DevicePath \"\""
	Aug 21 11:04:37 addons-500000 kubelet[2369]: I0821 11:04:37.053385    2369 scope.go:115] "RemoveContainer" containerID="734d7d69c9e8bff04a74f5ce2f78304cb992055330ae2198a5c1e05f571cd97e"
	Aug 21 11:04:37 addons-500000 kubelet[2369]: I0821 11:04:37.063376    2369 scope.go:115] "RemoveContainer" containerID="734d7d69c9e8bff04a74f5ce2f78304cb992055330ae2198a5c1e05f571cd97e"
	Aug 21 11:04:37 addons-500000 kubelet[2369]: E0821 11:04:37.063766    2369 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 734d7d69c9e8bff04a74f5ce2f78304cb992055330ae2198a5c1e05f571cd97e" containerID="734d7d69c9e8bff04a74f5ce2f78304cb992055330ae2198a5c1e05f571cd97e"
	Aug 21 11:04:37 addons-500000 kubelet[2369]: I0821 11:04:37.063786    2369 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:docker ID:734d7d69c9e8bff04a74f5ce2f78304cb992055330ae2198a5c1e05f571cd97e} err="failed to get container status \"734d7d69c9e8bff04a74f5ce2f78304cb992055330ae2198a5c1e05f571cd97e\": rpc error: code = Unknown desc = Error response from daemon: No such container: 734d7d69c9e8bff04a74f5ce2f78304cb992055330ae2198a5c1e05f571cd97e"
	Aug 21 11:04:37 addons-500000 kubelet[2369]: I0821 11:04:37.455804    2369 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=77ccc95d-6635-4336-b1c2-59548fdeea28 path="/var/lib/kubelet/pods/77ccc95d-6635-4336-b1c2-59548fdeea28/volumes"
	Aug 21 11:04:37 addons-500000 kubelet[2369]: I0821 11:04:37.455973    2369 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=c950764c-9601-4c76-adb3-ddb61bd6335d path="/var/lib/kubelet/pods/c950764c-9601-4c76-adb3-ddb61bd6335d/volumes"
	Aug 21 11:04:37 addons-500000 kubelet[2369]: I0821 11:04:37.456108    2369 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=e0a3c68e-5aaa-440d-a98d-7826d75c0519 path="/var/lib/kubelet/pods/e0a3c68e-5aaa-440d-a98d-7826d75c0519/volumes"
	Aug 21 11:04:40 addons-500000 kubelet[2369]: I0821 11:04:40.119763    2369 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="headlamp/headlamp-5c78f74d8d-llcss" podStartSLOduration=2.251334804 podCreationTimestamp="2023-08-21 11:04:34 +0000 UTC" firstStartedPulling="2023-08-21 11:04:35.36043408 +0000 UTC m=+1816.001548947" lastFinishedPulling="2023-08-21 11:04:39.228820032 +0000 UTC m=+1819.869934941" observedRunningTime="2023-08-21 11:04:40.109914907 +0000 UTC m=+1820.751029816" watchObservedRunningTime="2023-08-21 11:04:40.119720798 +0000 UTC m=+1820.760835707"
	Aug 21 11:04:41 addons-500000 kubelet[2369]: I0821 11:04:41.451970    2369 scope.go:115] "RemoveContainer" containerID="61cb73773eecc3faafe56084535ad2d59c6b1097346767deab59c844d247f185"
	Aug 21 11:04:41 addons-500000 kubelet[2369]: E0821 11:04:41.452254    2369 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=hello-world-app pod=hello-world-app-65bdb79f98-l7sq4_default(03900f9a-54f5-4d53-8e78-2fb31aa983b5)\"" pod="default/hello-world-app-65bdb79f98-l7sq4" podUID=03900f9a-54f5-4d53-8e78-2fb31aa983b5
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-500000 -n addons-500000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-500000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (136.82s)
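For anyone triaging this failure from the post-mortem above: the kubelet log shows hello-world-app stuck in CrashLoopBackOff, and the apiserver logs bearer-token errors for the ingress-nginx serviceaccount while that namespace is terminating. A minimal sketch for confirming both symptoms against the same profile (illustrative only; the addons-500000 context name is taken from this run, and the app=hello-world-app label is an assumption based on how the deployment is created here):

	kubectl --context addons-500000 -n default get pods -l app=hello-world-app
	kubectl --context addons-500000 get ns ingress-nginx -o jsonpath='{.status.phase}'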

TestAddons/parallel/InspektorGadget (480.94s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:329: TestAddons/parallel/InspektorGadget: WARNING: pod list for "gadget" "k8s-app=gadget" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:814: ***** TestAddons/parallel/InspektorGadget: pod "k8s-app=gadget" failed to start within 8m0s: context deadline exceeded ****
addons_test.go:814: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-500000 -n addons-500000
addons_test.go:814: TestAddons/parallel/InspektorGadget: showing logs for failed pods as of 2023-08-21 04:02:25.595525 -0700 PDT m=+1750.562550876
addons_test.go:815: failed waiting for inspektor-gadget pod: k8s-app=gadget within 8m0s: context deadline exceeded
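The wait above polls for pods matching the label selector k8s-app=gadget in the gadget namespace, so the most direct way to see why no pod became Ready within 8m0s is to query that same selector yourself. A sketch reusing this run's context name (commands are illustrative, not part of the test output):

	kubectl --context addons-500000 -n gadget get pods -l k8s-app=gadget -o wide
	kubectl --context addons-500000 -n gadget describe pods -l k8s-app=gadget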
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-500000 -n addons-500000
helpers_test.go:244: <<< TestAddons/parallel/InspektorGadget FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/InspektorGadget]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-500000 logs -n 25
helpers_test.go:252: TestAddons/parallel/InspektorGadget logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-670000 | jenkins | v1.31.2 | 21 Aug 23 03:33 PDT |                     |
	|         | -p download-only-670000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	| start   | -o=json --download-only           | download-only-670000 | jenkins | v1.31.2 | 21 Aug 23 03:33 PDT |                     |
	|         | -p download-only-670000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.4      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	| start   | -o=json --download-only           | download-only-670000 | jenkins | v1.31.2 | 21 Aug 23 03:33 PDT |                     |
	|         | -p download-only-670000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.0-rc.1 |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.31.2 | 21 Aug 23 03:33 PDT | 21 Aug 23 03:33 PDT |
	| delete  | -p download-only-670000           | download-only-670000 | jenkins | v1.31.2 | 21 Aug 23 03:33 PDT | 21 Aug 23 03:33 PDT |
	| delete  | -p download-only-670000           | download-only-670000 | jenkins | v1.31.2 | 21 Aug 23 03:33 PDT | 21 Aug 23 03:33 PDT |
	| start   | --download-only -p                | binary-mirror-462000 | jenkins | v1.31.2 | 21 Aug 23 03:33 PDT |                     |
	|         | binary-mirror-462000              |                      |         |         |                     |                     |
	|         | --alsologtostderr                 |                      |         |         |                     |                     |
	|         | --binary-mirror                   |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49329            |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-462000           | binary-mirror-462000 | jenkins | v1.31.2 | 21 Aug 23 03:33 PDT | 21 Aug 23 03:33 PDT |
	| start   | -p addons-500000                  | addons-500000        | jenkins | v1.31.2 | 21 Aug 23 03:33 PDT | 21 Aug 23 03:40 PDT |
	|         | --wait=true --memory=4000         |                      |         |         |                     |                     |
	|         | --alsologtostderr                 |                      |         |         |                     |                     |
	|         | --addons=registry                 |                      |         |         |                     |                     |
	|         | --addons=metrics-server           |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots          |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver      |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                 |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner            |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget         |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	|         | --addons=ingress                  |                      |         |         |                     |                     |
	|         | --addons=ingress-dns              |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p          | addons-500000        | jenkins | v1.31.2 | 21 Aug 23 03:52 PDT |                     |
	|         | addons-500000                     |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/21 03:33:48
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.7 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0821 03:33:48.415064    1442 out.go:296] Setting OutFile to fd 1 ...
	I0821 03:33:48.415176    1442 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 03:33:48.415179    1442 out.go:309] Setting ErrFile to fd 2...
	I0821 03:33:48.415182    1442 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 03:33:48.415284    1442 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17102-920/.minikube/bin
	I0821 03:33:48.416485    1442 out.go:303] Setting JSON to false
	I0821 03:33:48.431675    1442 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":202,"bootTime":1692613826,"procs":392,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0821 03:33:48.431757    1442 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0821 03:33:48.436776    1442 out.go:177] * [addons-500000] minikube v1.31.2 on Darwin 13.5 (arm64)
	I0821 03:33:48.443786    1442 out.go:177]   - MINIKUBE_LOCATION=17102
	I0821 03:33:48.443817    1442 notify.go:220] Checking for updates...
	I0821 03:33:48.452754    1442 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	I0821 03:33:48.459793    1442 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0821 03:33:48.466761    1442 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0821 03:33:48.469754    1442 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	I0821 03:33:48.472801    1442 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0821 03:33:48.476845    1442 driver.go:373] Setting default libvirt URI to qemu:///system
	I0821 03:33:48.479685    1442 out.go:177] * Using the qemu2 driver based on user configuration
	I0821 03:33:48.486794    1442 start.go:298] selected driver: qemu2
	I0821 03:33:48.486801    1442 start.go:902] validating driver "qemu2" against <nil>
	I0821 03:33:48.486809    1442 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0821 03:33:48.488928    1442 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0821 03:33:48.491687    1442 out.go:177] * Automatically selected the socket_vmnet network
	I0821 03:33:48.495787    1442 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0821 03:33:48.495806    1442 cni.go:84] Creating CNI manager for ""
	I0821 03:33:48.495814    1442 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0821 03:33:48.495818    1442 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0821 03:33:48.495823    1442 start_flags.go:319] config:
	{Name:addons-500000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:addons-500000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 03:33:48.500226    1442 iso.go:125] acquiring lock: {Name:mk813ea611542195bb0511881888be3fabc72ff6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 03:33:48.506762    1442 out.go:177] * Starting control plane node addons-500000 in cluster addons-500000
	I0821 03:33:48.510761    1442 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime docker
	I0821 03:33:48.510781    1442 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4
	I0821 03:33:48.510799    1442 cache.go:57] Caching tarball of preloaded images
	I0821 03:33:48.510861    1442 preload.go:174] Found /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0821 03:33:48.510867    1442 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on docker
	I0821 03:33:48.511057    1442 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/config.json ...
	I0821 03:33:48.511069    1442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/config.json: {Name:mke6ea6a330608889e821054234e4dab41e05376 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:33:48.511283    1442 start.go:365] acquiring machines lock for addons-500000: {Name:mk9b32d9fe994be32d77812db464b2cfa7bfb400 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0821 03:33:48.511397    1442 start.go:369] acquired machines lock for "addons-500000" in 109.25µs
	I0821 03:33:48.511409    1442 start.go:93] Provisioning new machine with config: &{Name:addons-500000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:addons-500000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0821 03:33:48.511444    1442 start.go:125] createHost starting for "" (driver="qemu2")
	I0821 03:33:48.515777    1442 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0821 03:33:48.825711    1442 start.go:159] libmachine.API.Create for "addons-500000" (driver="qemu2")
	I0821 03:33:48.825759    1442 client.go:168] LocalClient.Create starting
	I0821 03:33:48.825907    1442 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem
	I0821 03:33:48.926786    1442 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/cert.pem
	I0821 03:33:49.005435    1442 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17102-920/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17102-920/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0821 03:33:49.429478    1442 main.go:141] libmachine: Creating SSH key...
	I0821 03:33:49.603069    1442 main.go:141] libmachine: Creating Disk image...
	I0821 03:33:49.603078    1442 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0821 03:33:49.603290    1442 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/disk.qcow2.raw /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/disk.qcow2
	I0821 03:33:49.637224    1442 main.go:141] libmachine: STDOUT: 
	I0821 03:33:49.637249    1442 main.go:141] libmachine: STDERR: 
	I0821 03:33:49.637377    1442 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/disk.qcow2 +20000M
	I0821 03:33:49.644766    1442 main.go:141] libmachine: STDOUT: Image resized.
	
	I0821 03:33:49.644778    1442 main.go:141] libmachine: STDERR: 
	I0821 03:33:49.644801    1442 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/disk.qcow2
	I0821 03:33:49.644808    1442 main.go:141] libmachine: Starting QEMU VM...
	I0821 03:33:49.644850    1442 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:15:38:20:81:6d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/disk.qcow2
	I0821 03:33:49.712858    1442 main.go:141] libmachine: STDOUT: 
	I0821 03:33:49.712896    1442 main.go:141] libmachine: STDERR: 
	I0821 03:33:49.712900    1442 main.go:141] libmachine: Attempt 0
	I0821 03:33:49.712923    1442 main.go:141] libmachine: Searching for 5e:15:38:20:81:6d in /var/db/dhcpd_leases ...
	I0821 03:33:51.714037    1442 main.go:141] libmachine: Attempt 1
	I0821 03:33:51.714122    1442 main.go:141] libmachine: Searching for 5e:15:38:20:81:6d in /var/db/dhcpd_leases ...
	I0821 03:33:53.715339    1442 main.go:141] libmachine: Attempt 2
	I0821 03:33:53.715370    1442 main.go:141] libmachine: Searching for 5e:15:38:20:81:6d in /var/db/dhcpd_leases ...
	I0821 03:33:55.716394    1442 main.go:141] libmachine: Attempt 3
	I0821 03:33:55.716406    1442 main.go:141] libmachine: Searching for 5e:15:38:20:81:6d in /var/db/dhcpd_leases ...
	I0821 03:33:57.717443    1442 main.go:141] libmachine: Attempt 4
	I0821 03:33:57.717472    1442 main.go:141] libmachine: Searching for 5e:15:38:20:81:6d in /var/db/dhcpd_leases ...
	I0821 03:33:59.718558    1442 main.go:141] libmachine: Attempt 5
	I0821 03:33:59.718579    1442 main.go:141] libmachine: Searching for 5e:15:38:20:81:6d in /var/db/dhcpd_leases ...
	I0821 03:34:01.719634    1442 main.go:141] libmachine: Attempt 6
	I0821 03:34:01.719657    1442 main.go:141] libmachine: Searching for 5e:15:38:20:81:6d in /var/db/dhcpd_leases ...
	I0821 03:34:01.719810    1442 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0821 03:34:01.719849    1442 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:5e:15:38:20:81:6d ID:1,5e:15:38:20:81:6d Lease:0x64e48f18}
	I0821 03:34:01.719855    1442 main.go:141] libmachine: Found match: 5e:15:38:20:81:6d
	I0821 03:34:01.719867    1442 main.go:141] libmachine: IP: 192.168.105.2
	I0821 03:34:01.719873    1442 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
	I0821 03:34:03.738025    1442 machine.go:88] provisioning docker machine ...
	I0821 03:34:03.738086    1442 buildroot.go:166] provisioning hostname "addons-500000"
	I0821 03:34:03.739549    1442 main.go:141] libmachine: Using SSH client type: native
	I0821 03:34:03.740347    1442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102aae1e0] 0x102ab0c40 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0821 03:34:03.740367    1442 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-500000 && echo "addons-500000" | sudo tee /etc/hostname
	I0821 03:34:03.826570    1442 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-500000
	
	I0821 03:34:03.826696    1442 main.go:141] libmachine: Using SSH client type: native
	I0821 03:34:03.827174    1442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102aae1e0] 0x102ab0c40 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0821 03:34:03.827189    1442 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-500000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-500000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-500000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0821 03:34:03.891757    1442 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0821 03:34:03.891772    1442 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17102-920/.minikube CaCertPath:/Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17102-920/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17102-920/.minikube}
	I0821 03:34:03.891782    1442 buildroot.go:174] setting up certificates
	I0821 03:34:03.891796    1442 provision.go:83] configureAuth start
	I0821 03:34:03.891801    1442 provision.go:138] copyHostCerts
	I0821 03:34:03.891982    1442 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17102-920/.minikube/ca.pem (1078 bytes)
	I0821 03:34:03.892356    1442 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17102-920/.minikube/cert.pem (1123 bytes)
	I0821 03:34:03.892494    1442 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17102-920/.minikube/key.pem (1679 bytes)
	I0821 03:34:03.892606    1442 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17102-920/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca-key.pem org=jenkins.addons-500000 san=[192.168.105.2 192.168.105.2 localhost 127.0.0.1 minikube addons-500000]
	I0821 03:34:04.055231    1442 provision.go:172] copyRemoteCerts
	I0821 03:34:04.055290    1442 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0821 03:34:04.055299    1442 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/id_rsa Username:docker}
	I0821 03:34:04.085022    1442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0821 03:34:04.091757    1442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0821 03:34:04.098302    1442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0821 03:34:04.105297    1442 provision.go:86] duration metric: configureAuth took 213.489792ms
	I0821 03:34:04.105304    1442 buildroot.go:189] setting minikube options for container-runtime
	I0821 03:34:04.105410    1442 config.go:182] Loaded profile config "addons-500000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0821 03:34:04.105443    1442 main.go:141] libmachine: Using SSH client type: native
	I0821 03:34:04.105658    1442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102aae1e0] 0x102ab0c40 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0821 03:34:04.105665    1442 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0821 03:34:04.160033    1442 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0821 03:34:04.160039    1442 buildroot.go:70] root file system type: tmpfs
	I0821 03:34:04.160095    1442 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0821 03:34:04.160145    1442 main.go:141] libmachine: Using SSH client type: native
	I0821 03:34:04.160376    1442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102aae1e0] 0x102ab0c40 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0821 03:34:04.160410    1442 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0821 03:34:04.217511    1442 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0821 03:34:04.217555    1442 main.go:141] libmachine: Using SSH client type: native
	I0821 03:34:04.217777    1442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102aae1e0] 0x102ab0c40 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0821 03:34:04.217788    1442 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0821 03:34:04.516566    1442 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
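	Note: `diff -u old new` exits 0 only when the two files are identical, so the `|| { ... }` branch installs the freshly written unit and restarts Docker on any change. On this first boot /lib/systemd/system/docker.service does not exist yet, hence the "can't stat" message above and the unconditional install. The same compare-then-swap pattern, reduced to a sketch (unit name illustrative):
	sudo diff -u /lib/systemd/system/foo.service /lib/systemd/system/foo.service.new \
	  || { sudo mv /lib/systemd/system/foo.service.new /lib/systemd/system/foo.service; \
	       sudo systemctl daemon-reload && sudo systemctl enable foo && sudo systemctl restart foo; }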
	
	I0821 03:34:04.516576    1442 machine.go:91] provisioned docker machine in 778.543875ms
	I0821 03:34:04.516581    1442 client.go:171] LocalClient.Create took 15.691254833s
	I0821 03:34:04.516600    1442 start.go:167] duration metric: libmachine.API.Create for "addons-500000" took 15.691329875s
	I0821 03:34:04.516605    1442 start.go:300] post-start starting for "addons-500000" (driver="qemu2")
	I0821 03:34:04.516610    1442 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0821 03:34:04.516676    1442 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0821 03:34:04.516684    1442 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/id_rsa Username:docker}
	I0821 03:34:04.547645    1442 ssh_runner.go:195] Run: cat /etc/os-release
	I0821 03:34:04.548977    1442 info.go:137] Remote host: Buildroot 2021.02.12
	I0821 03:34:04.548988    1442 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17102-920/.minikube/addons for local assets ...
	I0821 03:34:04.549067    1442 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17102-920/.minikube/files for local assets ...
	I0821 03:34:04.549094    1442 start.go:303] post-start completed in 32.487208ms
	I0821 03:34:04.549503    1442 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/config.json ...
	I0821 03:34:04.549671    1442 start.go:128] duration metric: createHost completed in 16.038665083s
	I0821 03:34:04.549713    1442 main.go:141] libmachine: Using SSH client type: native
	I0821 03:34:04.549937    1442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102aae1e0] 0x102ab0c40 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0821 03:34:04.549942    1442 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0821 03:34:04.603319    1442 main.go:141] libmachine: SSH cmd err, output: <nil>: 1692614044.503149419
	
	I0821 03:34:04.603325    1442 fix.go:206] guest clock: 1692614044.503149419
	I0821 03:34:04.603329    1442 fix.go:219] Guest: 2023-08-21 03:34:04.503149419 -0700 PDT Remote: 2023-08-21 03:34:04.549674 -0700 PDT m=+16.153755168 (delta=-46.524581ms)
	I0821 03:34:04.603340    1442 fix.go:190] guest clock delta is within tolerance: -46.524581ms
	I0821 03:34:04.603349    1442 start.go:83] releasing machines lock for "addons-500000", held for 16.092394834s
	I0821 03:34:04.603625    1442 ssh_runner.go:195] Run: cat /version.json
	I0821 03:34:04.603635    1442 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/id_rsa Username:docker}
	I0821 03:34:04.603639    1442 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0821 03:34:04.603685    1442 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/id_rsa Username:docker}
	I0821 03:34:04.631400    1442 ssh_runner.go:195] Run: systemctl --version
	I0821 03:34:04.633303    1442 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0821 03:34:04.675003    1442 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0821 03:34:04.675044    1442 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0821 03:34:04.680093    1442 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0821 03:34:04.680102    1442 start.go:466] detecting cgroup driver to use...
	I0821 03:34:04.680217    1442 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0821 03:34:04.685575    1442 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0821 03:34:04.689003    1442 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0821 03:34:04.692463    1442 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0821 03:34:04.692496    1442 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0821 03:34:04.695492    1442 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0821 03:34:04.698438    1442 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0821 03:34:04.701779    1442 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0821 03:34:04.705308    1442 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0821 03:34:04.708997    1442 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0821 03:34:04.712485    1442 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0821 03:34:04.715157    1442 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0821 03:34:04.718062    1442 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0821 03:34:04.801182    1442 ssh_runner.go:195] Run: sudo systemctl restart containerd
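	Note: the sed edits above pin containerd to the cgroupfs driver (SystemdCgroup = false) before the restart; Docker and the kubelet are configured the same way later in this log, since the container runtime and the kubelet must agree on the cgroup driver. A quick cross-check (the first command appears verbatim further down; the grep is an assumed spot-check of the config kubeadm writes to /var/lib/kubelet/config.yaml):
	docker info --format '{{.CgroupDriver}}'             # expect: cgroupfs
	sudo grep cgroupDriver /var/lib/kubelet/config.yaml  # expect: cgroupDriver: cgroupfs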
	I0821 03:34:04.809752    1442 start.go:466] detecting cgroup driver to use...
	I0821 03:34:04.809829    1442 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0821 03:34:04.815491    1442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0821 03:34:04.820439    1442 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0821 03:34:04.826330    1442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0821 03:34:04.831197    1442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0821 03:34:04.835955    1442 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0821 03:34:04.893707    1442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0821 03:34:04.899704    1442 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0821 03:34:04.905738    1442 ssh_runner.go:195] Run: which cri-dockerd
	I0821 03:34:04.907314    1442 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0821 03:34:04.910018    1442 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0821 03:34:04.915159    1442 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0821 03:34:04.993497    1442 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0821 03:34:05.073322    1442 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0821 03:34:05.073337    1442 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0821 03:34:05.078736    1442 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0821 03:34:05.148942    1442 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0821 03:34:06.310888    1442 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.161962625s)
	I0821 03:34:06.310946    1442 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0821 03:34:06.389910    1442 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0821 03:34:06.470512    1442 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0821 03:34:06.540771    1442 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0821 03:34:06.608028    1442 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0821 03:34:06.614951    1442 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0821 03:34:06.680856    1442 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0821 03:34:06.705016    1442 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0821 03:34:06.705100    1442 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0821 03:34:06.707492    1442 start.go:534] Will wait 60s for crictl version
	I0821 03:34:06.707526    1442 ssh_runner.go:195] Run: which crictl
	I0821 03:34:06.708906    1442 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0821 03:34:06.723485    1442 start.go:550] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.4
	RuntimeApiVersion:  v1alpha2
	I0821 03:34:06.723553    1442 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0821 03:34:06.733136    1442 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0821 03:34:06.752243    1442 out.go:204] * Preparing Kubernetes v1.27.4 on Docker 24.0.4 ...
	I0821 03:34:06.752395    1442 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0821 03:34:06.753728    1442 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0821 03:34:06.757671    1442 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime docker
	I0821 03:34:06.757717    1442 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0821 03:34:06.767699    1442 docker.go:636] Got preloaded images: 
	I0821 03:34:06.767706    1442 docker.go:642] registry.k8s.io/kube-apiserver:v1.27.4 wasn't preloaded
	I0821 03:34:06.767758    1442 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0821 03:34:06.770623    1442 ssh_runner.go:195] Run: which lz4
	I0821 03:34:06.772016    1442 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0821 03:34:06.773407    1442 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0821 03:34:06.773426    1442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (343658271 bytes)
	I0821 03:34:08.065715    1442 docker.go:600] Took 1.293779 seconds to copy over tarball
	I0821 03:34:08.065776    1442 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0821 03:34:09.083194    1442 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.017432542s)
	I0821 03:34:09.083208    1442 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0821 03:34:09.098174    1442 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0821 03:34:09.101758    1442 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0821 03:34:09.107271    1442 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0821 03:34:09.185186    1442 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0821 03:34:11.583398    1442 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.398262792s)
	I0821 03:34:11.583497    1442 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0821 03:34:11.599112    1442 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.4
	registry.k8s.io/kube-controller-manager:v1.27.4
	registry.k8s.io/kube-scheduler:v1.27.4
	registry.k8s.io/kube-proxy:v1.27.4
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0821 03:34:11.599121    1442 cache_images.go:84] Images are preloaded, skipping loading
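	Note: the preload sequence above is: stat /preloaded.tar.lz4 on the guest, scp the cached tarball over when it is absent, unpack it into /var (which populates /var/lib/docker), delete the tarball, rewrite repositories.json, and restart Docker; the `docker images` listing then confirms all eight images landed. The extraction step on its own:
	# unpack the lz4 preload into /var so /var/lib/docker is populated, then reclaim the space
	sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4 && sudo rm -f /preloaded.tar.lz4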
	I0821 03:34:11.599173    1442 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0821 03:34:11.606813    1442 cni.go:84] Creating CNI manager for ""
	I0821 03:34:11.606822    1442 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0821 03:34:11.606852    1442 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0821 03:34:11.606862    1442 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.27.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-500000 NodeName:addons-500000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0821 03:34:11.606930    1442 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-500000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
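	Note: the four documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are written to /var/tmp/minikube/kubeadm.yaml and fed to `kubeadm init --config` below. Given that file, the rendered config can be exercised without mutating the host:
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run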
	
	I0821 03:34:11.606959    1442 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-500000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.4 ClusterName:addons-500000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0821 03:34:11.607013    1442 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.4
	I0821 03:34:11.609958    1442 binaries.go:44] Found k8s binaries, skipping transfer
	I0821 03:34:11.609992    1442 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0821 03:34:11.613080    1442 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0821 03:34:11.618135    1442 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0821 03:34:11.623217    1442 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I0821 03:34:11.628067    1442 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0821 03:34:11.629338    1442 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
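	Note: the /etc/hosts edit above is a replace-or-append: filter out any existing line ending in the tab-separated hostname, append the fresh mapping, and copy the temp file back into place. Generalized (hostname and IP illustrative):
	{ grep -v $'\thost.example.internal$' /etc/hosts; \
	  printf '192.0.2.1\thost.example.internal\n'; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts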
	I0821 03:34:11.633264    1442 certs.go:56] Setting up /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000 for IP: 192.168.105.2
	I0821 03:34:11.633272    1442 certs.go:190] acquiring lock for shared ca certs: {Name:mkaf8bee91c9bef113528e728629bac5c142d5d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:34:11.633419    1442 certs.go:204] generating minikubeCA CA: /Users/jenkins/minikube-integration/17102-920/.minikube/ca.key
	I0821 03:34:11.709497    1442 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17102-920/.minikube/ca.crt ...
	I0821 03:34:11.709504    1442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/ca.crt: {Name:mk11304afc04d282dffa1bbfafecb7763b86f0d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:34:11.709741    1442 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17102-920/.minikube/ca.key ...
	I0821 03:34:11.709747    1442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/ca.key: {Name:mk7632addcfceaabe09bce428c8dd59051132a6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:34:11.709856    1442 certs.go:204] generating proxyClientCA CA: /Users/jenkins/minikube-integration/17102-920/.minikube/proxy-client-ca.key
	I0821 03:34:11.928292    1442 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17102-920/.minikube/proxy-client-ca.crt ...
	I0821 03:34:11.928298    1442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/proxy-client-ca.crt: {Name:mk59ba2d6f1e462ee2e456d21a76e6acaba82b70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:34:11.928531    1442 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17102-920/.minikube/proxy-client-ca.key ...
	I0821 03:34:11.928534    1442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/proxy-client-ca.key: {Name:mk02c96134c44ce7714696be07e0b5c22f58dc64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:34:11.928684    1442 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/client.key
	I0821 03:34:11.928691    1442 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/client.crt with IP's: []
	I0821 03:34:12.116170    1442 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/client.crt ...
	I0821 03:34:12.116177    1442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/client.crt: {Name:mk3182b685506ec2dbfcad41054e3ffc2bf0f3b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:34:12.116379    1442 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/client.key ...
	I0821 03:34:12.116384    1442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/client.key: {Name:mk087ee0a568a92e1e97ae6eb06dd6604454b2e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:34:12.116489    1442 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/apiserver.key.96055969
	I0821 03:34:12.116499    1442 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/apiserver.crt.96055969 with IP's: [192.168.105.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0821 03:34:12.174634    1442 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/apiserver.crt.96055969 ...
	I0821 03:34:12.174637    1442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/apiserver.crt.96055969: {Name:mk02f137a3a75334a28e6811666f6d1dde47709c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:34:12.174771    1442 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/apiserver.key.96055969 ...
	I0821 03:34:12.174774    1442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/apiserver.key.96055969: {Name:mk629f60ce1370d0aadb852a255428713cef631b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:34:12.174873    1442 certs.go:337] copying /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/apiserver.crt.96055969 -> /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/apiserver.crt
	I0821 03:34:12.175028    1442 certs.go:341] copying /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/apiserver.key.96055969 -> /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/apiserver.key
	I0821 03:34:12.175114    1442 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/proxy-client.key
	I0821 03:34:12.175123    1442 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/proxy-client.crt with IP's: []
	I0821 03:34:12.291172    1442 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/proxy-client.crt ...
	I0821 03:34:12.291175    1442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/proxy-client.crt: {Name:mk4861ba5de37ed8d82543663b167ed0e04664dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:34:12.291331    1442 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/proxy-client.key ...
	I0821 03:34:12.291334    1442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/proxy-client.key: {Name:mk5eb1fb206858f7f6262a3b86ec8673fdeb4399 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:34:12.291586    1442 certs.go:437] found cert: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca-key.pem (1679 bytes)
	I0821 03:34:12.291611    1442 certs.go:437] found cert: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem (1078 bytes)
	I0821 03:34:12.291633    1442 certs.go:437] found cert: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/Users/jenkins/minikube-integration/17102-920/.minikube/certs/cert.pem (1123 bytes)
	I0821 03:34:12.291654    1442 certs.go:437] found cert: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/Users/jenkins/minikube-integration/17102-920/.minikube/certs/key.pem (1679 bytes)
	I0821 03:34:12.292029    1442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0821 03:34:12.300489    1442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0821 03:34:12.307765    1442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0821 03:34:12.314499    1442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0821 03:34:12.321449    1442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0821 03:34:12.328965    1442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0821 03:34:12.336085    1442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0821 03:34:12.342676    1442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0821 03:34:12.349529    1442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0821 03:34:12.356907    1442 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0821 03:34:12.363000    1442 ssh_runner.go:195] Run: openssl version
	I0821 03:34:12.364943    1442 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0821 03:34:12.368659    1442 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0821 03:34:12.370316    1442 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 21 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I0821 03:34:12.370337    1442 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0821 03:34:12.372170    1442 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
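	Note: the b5213941.0 link implements OpenSSL's hashed-directory lookup: tools resolve a CA in /etc/ssl/certs by the hash of its subject name, so the link must be named <subject-hash>.0. The hash printed by the `openssl x509 -hash` run above is what names the symlink:
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # b5213941 here
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$h.0"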
	I0821 03:34:12.375051    1442 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0821 03:34:12.376254    1442 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0821 03:34:12.376292    1442 kubeadm.go:404] StartCluster: {Name:addons-500000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:addons-500000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 03:34:12.376353    1442 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0821 03:34:12.381765    1442 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0821 03:34:12.385127    1442 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0821 03:34:12.388050    1442 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0821 03:34:12.390699    1442 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0821 03:34:12.390714    1442 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0821 03:34:12.412358    1442 kubeadm.go:322] [init] Using Kubernetes version: v1.27.4
	I0821 03:34:12.412390    1442 kubeadm.go:322] [preflight] Running pre-flight checks
	I0821 03:34:12.465080    1442 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0821 03:34:12.465135    1442 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0821 03:34:12.465183    1442 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0821 03:34:12.530098    1442 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0821 03:34:12.539343    1442 out.go:204]   - Generating certificates and keys ...
	I0821 03:34:12.539375    1442 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0821 03:34:12.539413    1442 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0821 03:34:12.639909    1442 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0821 03:34:12.680054    1442 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0821 03:34:12.714095    1442 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0821 03:34:12.849965    1442 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0821 03:34:12.996137    1442 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0821 03:34:12.996199    1442 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-500000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0821 03:34:13.141022    1442 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0821 03:34:13.141102    1442 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-500000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0821 03:34:13.228117    1442 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0821 03:34:13.409230    1442 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0821 03:34:13.774136    1442 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0821 03:34:13.774180    1442 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0821 03:34:13.866700    1442 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0821 03:34:13.977782    1442 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0821 03:34:14.068222    1442 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0821 03:34:14.144551    1442 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0821 03:34:14.151809    1442 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0821 03:34:14.152307    1442 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0821 03:34:14.152438    1442 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0821 03:34:14.228545    1442 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0821 03:34:14.232527    1442 out.go:204]   - Booting up control plane ...
	I0821 03:34:14.232575    1442 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0821 03:34:14.232614    1442 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0821 03:34:14.232645    1442 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0821 03:34:14.236440    1442 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0821 03:34:14.238376    1442 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0821 03:34:18.241227    1442 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.002539 seconds
	I0821 03:34:18.241427    1442 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0821 03:34:18.252886    1442 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0821 03:34:18.774491    1442 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0821 03:34:18.774728    1442 kubeadm.go:322] [mark-control-plane] Marking the node addons-500000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0821 03:34:19.280325    1442 kubeadm.go:322] [bootstrap-token] Using token: jvxtql.8wgzhr7nb5g9o93n
	I0821 03:34:19.286479    1442 out.go:204]   - Configuring RBAC rules ...
	I0821 03:34:19.286537    1442 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0821 03:34:19.290363    1442 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0821 03:34:19.293121    1442 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0821 03:34:19.294256    1442 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0821 03:34:19.295736    1442 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0821 03:34:19.296773    1442 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0821 03:34:19.301173    1442 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0821 03:34:19.474355    1442 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0821 03:34:19.693544    1442 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0821 03:34:19.694011    1442 kubeadm.go:322] 
	I0821 03:34:19.694043    1442 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0821 03:34:19.694047    1442 kubeadm.go:322] 
	I0821 03:34:19.694084    1442 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0821 03:34:19.694086    1442 kubeadm.go:322] 
	I0821 03:34:19.694099    1442 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0821 03:34:19.694192    1442 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0821 03:34:19.694216    1442 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0821 03:34:19.694219    1442 kubeadm.go:322] 
	I0821 03:34:19.694251    1442 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0821 03:34:19.694263    1442 kubeadm.go:322] 
	I0821 03:34:19.694293    1442 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0821 03:34:19.694296    1442 kubeadm.go:322] 
	I0821 03:34:19.694320    1442 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0821 03:34:19.694360    1442 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0821 03:34:19.694390    1442 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0821 03:34:19.694394    1442 kubeadm.go:322] 
	I0821 03:34:19.694446    1442 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0821 03:34:19.694488    1442 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0821 03:34:19.694495    1442 kubeadm.go:322] 
	I0821 03:34:19.694535    1442 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token jvxtql.8wgzhr7nb5g9o93n \
	I0821 03:34:19.694617    1442 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:c361d9930575cb4141f86c9c696a425212668e350af0245a5e7de41b1bd48407 \
	I0821 03:34:19.694632    1442 kubeadm.go:322] 	--control-plane 
	I0821 03:34:19.694634    1442 kubeadm.go:322] 
	I0821 03:34:19.694684    1442 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0821 03:34:19.694688    1442 kubeadm.go:322] 
	I0821 03:34:19.694735    1442 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token jvxtql.8wgzhr7nb5g9o93n \
	I0821 03:34:19.694782    1442 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:c361d9930575cb4141f86c9c696a425212668e350af0245a5e7de41b1bd48407 
	I0821 03:34:19.694835    1442 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
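	Note: the --discovery-token-ca-cert-hash above is the SHA-256 of the cluster CA's DER-encoded public key. If the value is lost it can be recomputed from the CA certificate, which this cluster keeps under /var/lib/minikube/certs (per the certificatesDir in the kubeadm config above); the pipeline is the standard kubeadm recipe and assumes an RSA CA key:
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'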
	I0821 03:34:19.694840    1442 cni.go:84] Creating CNI manager for ""
	I0821 03:34:19.694847    1442 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0821 03:34:19.703814    1442 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0821 03:34:19.707890    1442 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0821 03:34:19.711023    1442 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0821 03:34:19.716873    1442 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0821 03:34:19.716924    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:19.716951    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43 minikube.k8s.io/name=addons-500000 minikube.k8s.io/updated_at=2023_08_21T03_34_19_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:19.723924    1442 ops.go:34] apiserver oom_adj: -16
	I0821 03:34:19.767999    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:19.814902    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:20.352169    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:20.852188    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:21.352164    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:21.852123    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:22.352346    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:22.852184    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:23.352159    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:23.852279    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:24.352116    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:24.852182    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:25.352203    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:25.852083    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:26.352293    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:26.852062    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:27.352046    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:27.851991    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:28.352173    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:28.851976    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:29.352173    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:29.851943    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:30.352016    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:30.851904    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:31.351923    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:31.851905    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:32.351835    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:32.388500    1442 kubeadm.go:1081] duration metric: took 12.671972458s to wait for elevateKubeSystemPrivileges.
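	Note: the run of `kubectl get sa default` calls above is a roughly 500ms polling loop: the cluster-admin binding applied to kube-system earlier can only take effect once the default ServiceAccount exists, so the command is retried until it succeeds. As a sketch:
	until sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig; do sleep 0.5; done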
	I0821 03:34:32.388516    1442 kubeadm.go:406] StartCluster complete in 20.01278175s
	I0821 03:34:32.388525    1442 settings.go:142] acquiring lock: {Name:mkeb461ec3a6a92ee32ce41e8df63d6759cb2728 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:34:32.388680    1442 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17102-920/kubeconfig
	I0821 03:34:32.388902    1442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/kubeconfig: {Name:mk2bc9c64ad130c36a0253707ac2ba3f8fd22371 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:34:32.389107    1442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0821 03:34:32.389147    1442 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0821 03:34:32.389221    1442 addons.go:69] Setting volumesnapshots=true in profile "addons-500000"
	I0821 03:34:32.389227    1442 addons.go:231] Setting addon volumesnapshots=true in "addons-500000"
	I0821 03:34:32.389225    1442 addons.go:69] Setting cloud-spanner=true in profile "addons-500000"
	I0821 03:34:32.389236    1442 addons.go:231] Setting addon cloud-spanner=true in "addons-500000"
	I0821 03:34:32.389251    1442 config.go:182] Loaded profile config "addons-500000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0821 03:34:32.389271    1442 host.go:66] Checking if "addons-500000" exists ...
	I0821 03:34:32.389279    1442 addons.go:69] Setting storage-provisioner=true in profile "addons-500000"
	I0821 03:34:32.389222    1442 addons.go:69] Setting gcp-auth=true in profile "addons-500000"
	I0821 03:34:32.389282    1442 addons.go:231] Setting addon storage-provisioner=true in "addons-500000"
	I0821 03:34:32.389288    1442 mustload.go:65] Loading cluster: addons-500000
	I0821 03:34:32.389299    1442 host.go:66] Checking if "addons-500000" exists ...
	I0821 03:34:32.389299    1442 addons.go:69] Setting inspektor-gadget=true in profile "addons-500000"
	I0821 03:34:32.389327    1442 addons.go:69] Setting registry=true in profile "addons-500000"
	I0821 03:34:32.389360    1442 config.go:182] Loaded profile config "addons-500000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0821 03:34:32.389358    1442 addons.go:69] Setting ingress-dns=true in profile "addons-500000"
	I0821 03:34:32.389378    1442 addons.go:231] Setting addon ingress-dns=true in "addons-500000"
	I0821 03:34:32.389273    1442 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-500000"
	I0821 03:34:32.389396    1442 addons.go:69] Setting ingress=true in profile "addons-500000"
	I0821 03:34:32.389434    1442 host.go:66] Checking if "addons-500000" exists ...
	I0821 03:34:32.389418    1442 addons.go:69] Setting metrics-server=true in profile "addons-500000"
	I0821 03:34:32.389454    1442 addons.go:231] Setting addon metrics-server=true in "addons-500000"
	I0821 03:34:32.389465    1442 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-500000"
	I0821 03:34:32.389506    1442 host.go:66] Checking if "addons-500000" exists ...
	I0821 03:34:32.389519    1442 host.go:66] Checking if "addons-500000" exists ...
	I0821 03:34:32.389271    1442 host.go:66] Checking if "addons-500000" exists ...
	W0821 03:34:32.389564    1442 host.go:54] host status for "addons-500000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/monitor: connect: connection refused
	W0821 03:34:32.389572    1442 addons.go:277] "addons-500000" is not running, setting storage-provisioner=true and skipping enablement (err=<nil>)
	I0821 03:34:32.389347    1442 addons.go:231] Setting addon inspektor-gadget=true in "addons-500000"
	I0821 03:34:32.389693    1442 host.go:66] Checking if "addons-500000" exists ...
	W0821 03:34:32.389757    1442 host.go:54] host status for "addons-500000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/monitor: connect: connection refused
	W0821 03:34:32.389767    1442 addons.go:277] "addons-500000" is not running, setting cloud-spanner=true and skipping enablement (err=<nil>)
	I0821 03:34:32.389367    1442 addons.go:231] Setting addon registry=true in "addons-500000"
	I0821 03:34:32.389786    1442 host.go:66] Checking if "addons-500000" exists ...
	W0821 03:34:32.389790    1442 host.go:54] host status for "addons-500000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/monitor: connect: connection refused
	W0821 03:34:32.389796    1442 addons.go:277] "addons-500000" is not running, setting metrics-server=true and skipping enablement (err=<nil>)
	I0821 03:34:32.389799    1442 addons.go:467] Verifying addon metrics-server=true in "addons-500000"
	W0821 03:34:32.389788    1442 host.go:54] host status for "addons-500000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/monitor: connect: connection refused
	W0821 03:34:32.389803    1442 addons.go:277] "addons-500000" is not running, setting csi-hostpath-driver=true and skipping enablement (err=<nil>)
	I0821 03:34:32.389805    1442 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-500000"
	I0821 03:34:32.389275    1442 addons.go:69] Setting default-storageclass=true in profile "addons-500000"
	I0821 03:34:32.394058    1442 out.go:177] * Verifying csi-hostpath-driver addon...
	I0821 03:34:32.389436    1442 addons.go:231] Setting addon ingress=true in "addons-500000"
	I0821 03:34:32.389868    1442 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-500000"
	W0821 03:34:32.389953    1442 host.go:54] host status for "addons-500000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/monitor: connect: connection refused
	W0821 03:34:32.390033    1442 host.go:54] host status for "addons-500000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/monitor: connect: connection refused
	W0821 03:34:32.390053    1442 host.go:54] host status for "addons-500000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/monitor: connect: connection refused
	I0821 03:34:32.390510    1442 host.go:66] Checking if "addons-500000" exists ...
	I0821 03:34:32.409190    1442 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	W0821 03:34:32.404296    1442 addons.go:277] "addons-500000" is not running, setting ingress-dns=true and skipping enablement (err=<nil>)
	W0821 03:34:32.404342    1442 addons.go:277] "addons-500000" is not running, setting inspektor-gadget=true and skipping enablement (err=<nil>)
	W0821 03:34:32.404346    1442 addons.go:277] "addons-500000" is not running, setting registry=true and skipping enablement (err=<nil>)
	I0821 03:34:32.404410    1442 host.go:66] Checking if "addons-500000" exists ...
	I0821 03:34:32.404764    1442 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0821 03:34:32.413218    1442 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0821 03:34:32.413224    1442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0821 03:34:32.413232    1442 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/id_rsa Username:docker}
	I0821 03:34:32.413266    1442 addons.go:467] Verifying addon registry=true in "addons-500000"
	I0821 03:34:32.418274    1442 out.go:177] * Verifying registry addon...
	I0821 03:34:32.419795    1442 addons.go:231] Setting addon default-storageclass=true in "addons-500000"
	I0821 03:34:32.419868    1442 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-500000" context rescaled to 1 replicas
	I0821 03:34:32.420817    1442 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0821 03:34:32.421498    1442 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0821 03:34:32.421694    1442 host.go:66] Checking if "addons-500000" exists ...
	I0821 03:34:32.421701    1442 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0821 03:34:32.421849    1442 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0821 03:34:32.431173    1442 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0821 03:34:32.440212    1442 out.go:177] * Verifying Kubernetes components...
	I0821 03:34:32.431974    1442 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0821 03:34:32.435186    1442 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=registry
	I0821 03:34:32.444202    1442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0821 03:34:32.444209    1442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0821 03:34:32.447466    1442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0821 03:34:32.448196    1442 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.8.1
	I0821 03:34:32.448211    1442 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/id_rsa Username:docker}
	I0821 03:34:32.451292    1442 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0821 03:34:32.451299    1442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16083 bytes)
	I0821 03:34:32.451306    1442 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/id_rsa Username:docker}
	I0821 03:34:32.454351    1442 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0821 03:34:32.454358    1442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0821 03:34:32.485876    1442 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0821 03:34:32.485886    1442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0821 03:34:32.513135    1442 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0821 03:34:32.513147    1442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0821 03:34:32.532036    1442 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0821 03:34:32.532052    1442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0821 03:34:32.537566    1442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0821 03:34:32.542495    1442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0821 03:34:32.548533    1442 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0821 03:34:32.548541    1442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0821 03:34:32.568087    1442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0821 03:34:33.517324    1442 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.069159875s)
	I0821 03:34:33.517338    1442 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.069147125s)
	I0821 03:34:33.517342    1442 start.go:901] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
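	
	Note: the host record is injected by the sed pipeline run at 03:34:32.447466 above; reconstructed from that command (not read back from the live object), the fragment spliced into the coredns Corefile is:
	
	        hosts {
	           192.168.105.1 host.minikube.internal
	           fallthrough
	        }
	
	plus a `log` directive inserted before `errors`, which is what lets pods resolve host.minikube.internal to the host machine.
	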
	I0821 03:34:33.517808    1442 node_ready.go:35] waiting up to 6m0s for node "addons-500000" to be "Ready" ...
	I0821 03:34:33.519592    1442 node_ready.go:49] node "addons-500000" has status "Ready":"True"
	I0821 03:34:33.519599    1442 node_ready.go:38] duration metric: took 1.779708ms waiting for node "addons-500000" to be "Ready" ...
	I0821 03:34:33.519602    1442 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0821 03:34:33.522687    1442 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-97rp7" in "kube-system" namespace to be "Ready" ...
	I0821 03:34:33.964195    1442 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.421717084s)
	I0821 03:34:33.964211    1442 addons.go:467] Verifying addon ingress=true in "addons-500000"
	I0821 03:34:33.968723    1442 out.go:177] * Verifying ingress addon...
	I0821 03:34:33.964338    1442 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.396275834s)
	W0821 03:34:33.968774    1442 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0821 03:34:33.975741    1442 retry.go:31] will retry after 231.591556ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
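	
	Note: this is the usual CRD-establishment race, not a broken manifest: the VolumeSnapshotClass object is applied in the same kubectl invocation that creates the snapshot.storage.k8s.io CRDs, and the API server has not yet established the new kind, hence "no matches for kind". minikube retries and then re-applies with --force (see 03:34:34.207434 below, completing cleanly at 03:34:36). Outside of minikube, a common way to avoid the race is to wait for the CRD to be established before applying custom resources; a minimal sketch:
	
	    # sketch: establish the CRD first, then create objects of that kind
	    kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	    kubectl wait --for condition=established --timeout=60s \
	      crd/volumesnapshotclasses.snapshot.storage.k8s.io
	    kubectl apply -f csi-hostpath-snapshotclass.yaml
	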
	I0821 03:34:33.976141    1442 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0821 03:34:33.984299    1442 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0821 03:34:33.984307    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:33.987720    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:34.207434    1442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0821 03:34:34.491123    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:34.991180    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:35.490538    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:35.534205    1442 pod_ready.go:102] pod "coredns-5d78c9869d-97rp7" in "kube-system" namespace has status "Ready":"False"
	I0821 03:34:35.990628    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:36.490998    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:36.745839    1442 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.5384555s)
	I0821 03:34:36.990793    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:37.491119    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:37.534210    1442 pod_ready.go:102] pod "coredns-5d78c9869d-97rp7" in "kube-system" namespace has status "Ready":"False"
	I0821 03:34:37.990643    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:38.490772    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:38.997287    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:39.008172    1442 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0821 03:34:39.008186    1442 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/id_rsa Username:docker}
	I0821 03:34:39.055480    1442 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0821 03:34:39.064828    1442 addons.go:231] Setting addon gcp-auth=true in "addons-500000"
	I0821 03:34:39.064858    1442 host.go:66] Checking if "addons-500000" exists ...
	I0821 03:34:39.065649    1442 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0821 03:34:39.065660    1442 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/id_rsa Username:docker}
	I0821 03:34:39.100776    1442 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0821 03:34:39.103705    1442 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0821 03:34:39.107726    1442 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0821 03:34:39.107734    1442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0821 03:34:39.113078    1442 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0821 03:34:39.113087    1442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0821 03:34:39.127541    1442 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0821 03:34:39.127551    1442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0821 03:34:39.133486    1442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0821 03:34:39.491109    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:39.534694    1442 pod_ready.go:102] pod "coredns-5d78c9869d-97rp7" in "kube-system" namespace has status "Ready":"False"
	I0821 03:34:39.629710    1442 addons.go:467] Verifying addon gcp-auth=true in "addons-500000"
	I0821 03:34:39.641410    1442 out.go:177] * Verifying gcp-auth addon...
	I0821 03:34:39.650441    1442 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0821 03:34:39.656554    1442 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0821 03:34:39.656563    1442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 03:34:39.658191    1442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 03:34:39.991177    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:40.161154    1442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 03:34:40.492443    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:40.660810    1442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 03:34:40.990558    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:41.161357    1442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 03:34:41.492269    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:41.534695    1442 pod_ready.go:102] pod "coredns-5d78c9869d-97rp7" in "kube-system" namespace has status "Ready":"False"
	I0821 03:34:41.660947    1442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 03:34:41.990678    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:42.161013    1442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 03:34:42.490658    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:42.660884    1442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 03:34:42.990530    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:43.161042    1442 kapi.go:107] duration metric: took 3.510698166s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0821 03:34:43.165184    1442 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-500000 cluster.
	I0821 03:34:43.169238    1442 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0821 03:34:43.173158    1442 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
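	
	Note: the gcp-auth webhook decides at admission time, so the skip label must be present when the pod is created. A minimal sketch (pod name and image are illustrative, not from this run):
	
	    kubectl apply -f - <<'EOF'
	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: no-gcp-auth-demo          # hypothetical name
	      labels:
	        gcp-auth-skip-secret: "true"  # presence of the key is what the webhook checks
	    spec:
	      containers:
	      - name: busybox
	        image: busybox:1.36
	        command: ["sleep", "3600"]
	    EOF
	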
	I0821 03:34:43.491145    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:43.534713    1442 pod_ready.go:97] pod "coredns-5d78c9869d-97rp7" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-21 03:34:32 -0700 PDT Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-21 03:34:32 -0700 PDT Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-21 03:34:32 -0700 PDT Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-21 03:34:32 -0700 PDT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.105.2 HostIPs:[] PodIP: PodIPs:[] StartTime:2023-08-21 03:34:32 -0700 PDT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-08-21 03:34:33 -0700 PDT,FinishedAt:2023-08-21 03:34:43 -0700 PDT,ContainerID:docker://d9032391cb53f0fa8cfd4e1696eef2d7eb7096ba08423fd5087bb7b4d2fba5ed,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:docker://d9032391cb53f0fa8cfd4e1696eef2d7eb7096ba08423fd5087bb7b4d2fba5ed Started:0x140018d39a0 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0821 03:34:43.534727    1442 pod_ready.go:81] duration metric: took 10.012309458s waiting for pod "coredns-5d78c9869d-97rp7" in "kube-system" namespace to be "Ready" ...
	E0821 03:34:43.534732    1442 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-5d78c9869d-97rp7" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-21 03:34:32 -0700 PDT Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-21 03:34:32 -0700 PDT Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-21 03:34:32 -0700 PDT Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-21 03:34:32 -0700 PDT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.105.2 HostIPs:[] PodIP: PodIPs:[] StartTime:2023-08-21 03:34:32 -0700 PDT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-08-21 03:34:33 -0700 PDT,FinishedAt:2023-08-21 03:34:43 -0700 PDT,ContainerID:docker://d9032391cb53f0fa8cfd4e1696eef2d7eb7096ba08423fd5087bb7b4d2fba5ed,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:docker://d9032391cb53f0fa8cfd4e1696eef2d7eb7096ba08423fd5087bb7b4d2fba5ed Started:0x140018d39a0 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0821 03:34:43.534736    1442 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-hbg44" in "kube-system" namespace to be "Ready" ...
	I0821 03:34:43.537136    1442 pod_ready.go:92] pod "coredns-5d78c9869d-hbg44" in "kube-system" namespace has status "Ready":"True"
	I0821 03:34:43.537140    1442 pod_ready.go:81] duration metric: took 2.400375ms waiting for pod "coredns-5d78c9869d-hbg44" in "kube-system" namespace to be "Ready" ...
	I0821 03:34:43.537145    1442 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-500000" in "kube-system" namespace to be "Ready" ...
	I0821 03:34:43.539758    1442 pod_ready.go:92] pod "etcd-addons-500000" in "kube-system" namespace has status "Ready":"True"
	I0821 03:34:43.539762    1442 pod_ready.go:81] duration metric: took 2.614916ms waiting for pod "etcd-addons-500000" in "kube-system" namespace to be "Ready" ...
	I0821 03:34:43.539766    1442 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-500000" in "kube-system" namespace to be "Ready" ...
	I0821 03:34:43.542039    1442 pod_ready.go:92] pod "kube-apiserver-addons-500000" in "kube-system" namespace has status "Ready":"True"
	I0821 03:34:43.542045    1442 pod_ready.go:81] duration metric: took 2.276584ms waiting for pod "kube-apiserver-addons-500000" in "kube-system" namespace to be "Ready" ...
	I0821 03:34:43.542049    1442 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-500000" in "kube-system" namespace to be "Ready" ...
	I0821 03:34:43.544341    1442 pod_ready.go:92] pod "kube-controller-manager-addons-500000" in "kube-system" namespace has status "Ready":"True"
	I0821 03:34:43.544345    1442 pod_ready.go:81] duration metric: took 2.2935ms waiting for pod "kube-controller-manager-addons-500000" in "kube-system" namespace to be "Ready" ...
	I0821 03:34:43.544348    1442 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-z2wj9" in "kube-system" namespace to be "Ready" ...
	I0821 03:34:43.933736    1442 pod_ready.go:92] pod "kube-proxy-z2wj9" in "kube-system" namespace has status "Ready":"True"
	I0821 03:34:43.933748    1442 pod_ready.go:81] duration metric: took 389.407375ms waiting for pod "kube-proxy-z2wj9" in "kube-system" namespace to be "Ready" ...
	I0821 03:34:43.933752    1442 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-500000" in "kube-system" namespace to be "Ready" ...
	I0821 03:34:43.990470    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:44.334535    1442 pod_ready.go:92] pod "kube-scheduler-addons-500000" in "kube-system" namespace has status "Ready":"True"
	I0821 03:34:44.334545    1442 pod_ready.go:81] duration metric: took 400.801125ms waiting for pod "kube-scheduler-addons-500000" in "kube-system" namespace to be "Ready" ...
	I0821 03:34:44.334549    1442 pod_ready.go:38] duration metric: took 10.81524225s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0821 03:34:44.334558    1442 api_server.go:52] waiting for apiserver process to appear ...
	I0821 03:34:44.334639    1442 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0821 03:34:44.339980    1442 api_server.go:72] duration metric: took 11.909098333s to wait for apiserver process to appear ...
	I0821 03:34:44.339987    1442 api_server.go:88] waiting for apiserver healthz status ...
	I0821 03:34:44.339993    1442 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I0821 03:34:44.344178    1442 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
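	
	Note: the same probe can be reproduced by hand once kubeconfig points at the cluster, without dialing the endpoint directly:
	
	    kubectl get --raw /healthz            # returns "ok" on a healthy apiserver
	    kubectl get --raw '/readyz?verbose'   # per-check breakdown, where supported
	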
	I0821 03:34:44.344920    1442 api_server.go:141] control plane version: v1.27.4
	I0821 03:34:44.344925    1442 api_server.go:131] duration metric: took 4.936ms to wait for apiserver health ...
	I0821 03:34:44.344929    1442 system_pods.go:43] waiting for kube-system pods to appear ...
	I0821 03:34:44.490452    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:44.535983    1442 system_pods.go:59] 8 kube-system pods found
	I0821 03:34:44.535991    1442 system_pods.go:61] "coredns-5d78c9869d-hbg44" [2212048e-385c-4235-ad14-1b9e4e812106] Running
	I0821 03:34:44.535994    1442 system_pods.go:61] "etcd-addons-500000" [dcde2eed-b2a3-4b2d-af51-14d42189714c] Running
	I0821 03:34:44.536011    1442 system_pods.go:61] "kube-apiserver-addons-500000" [a4c38aeb-a7ef-4239-ac34-2437f9c67d96] Running
	I0821 03:34:44.536015    1442 system_pods.go:61] "kube-controller-manager-addons-500000" [972b1e42-cd56-4f77-ad52-a1df2b79fdae] Running
	I0821 03:34:44.536018    1442 system_pods.go:61] "kube-proxy-z2wj9" [56cdd0e9-2b8f-476e-be08-a52381eecb16] Running
	I0821 03:34:44.536020    1442 system_pods.go:61] "kube-scheduler-addons-500000" [c2d2f1e5-45c6-48a9-990d-7e32d9d75976] Running
	I0821 03:34:44.536022    1442 system_pods.go:61] "snapshot-controller-75bbb956b9-4pgqh" [7452ce04-2fbb-4f7a-9e5f-87b8b577fc94] Running
	I0821 03:34:44.536025    1442 system_pods.go:61] "snapshot-controller-75bbb956b9-j9mkf" [dbd2a297-29a5-4435-8fb1-849d8ae91771] Running
	I0821 03:34:44.536028    1442 system_pods.go:74] duration metric: took 191.1015ms to wait for pod list to return data ...
	I0821 03:34:44.536033    1442 default_sa.go:34] waiting for default service account to be created ...
	I0821 03:34:44.734042    1442 default_sa.go:45] found service account: "default"
	I0821 03:34:44.734051    1442 default_sa.go:55] duration metric: took 198.020583ms for default service account to be created ...
	I0821 03:34:44.734055    1442 system_pods.go:116] waiting for k8s-apps to be running ...
	I0821 03:34:44.935348    1442 system_pods.go:86] 8 kube-system pods found
	I0821 03:34:44.935359    1442 system_pods.go:89] "coredns-5d78c9869d-hbg44" [2212048e-385c-4235-ad14-1b9e4e812106] Running
	I0821 03:34:44.935362    1442 system_pods.go:89] "etcd-addons-500000" [dcde2eed-b2a3-4b2d-af51-14d42189714c] Running
	I0821 03:34:44.935365    1442 system_pods.go:89] "kube-apiserver-addons-500000" [a4c38aeb-a7ef-4239-ac34-2437f9c67d96] Running
	I0821 03:34:44.935367    1442 system_pods.go:89] "kube-controller-manager-addons-500000" [972b1e42-cd56-4f77-ad52-a1df2b79fdae] Running
	I0821 03:34:44.935369    1442 system_pods.go:89] "kube-proxy-z2wj9" [56cdd0e9-2b8f-476e-be08-a52381eecb16] Running
	I0821 03:34:44.935372    1442 system_pods.go:89] "kube-scheduler-addons-500000" [c2d2f1e5-45c6-48a9-990d-7e32d9d75976] Running
	I0821 03:34:44.935374    1442 system_pods.go:89] "snapshot-controller-75bbb956b9-4pgqh" [7452ce04-2fbb-4f7a-9e5f-87b8b577fc94] Running
	I0821 03:34:44.935376    1442 system_pods.go:89] "snapshot-controller-75bbb956b9-j9mkf" [dbd2a297-29a5-4435-8fb1-849d8ae91771] Running
	I0821 03:34:44.935380    1442 system_pods.go:126] duration metric: took 201.327917ms to wait for k8s-apps to be running ...
	I0821 03:34:44.935391    1442 system_svc.go:44] waiting for kubelet service to be running ....
	I0821 03:34:44.935475    1442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0821 03:34:44.941643    1442 system_svc.go:56] duration metric: took 6.252209ms WaitForService to wait for kubelet.
	I0821 03:34:44.941651    1442 kubeadm.go:581] duration metric: took 12.5107865s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0821 03:34:44.941660    1442 node_conditions.go:102] verifying NodePressure condition ...
	I0821 03:34:44.990746    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:45.134674    1442 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0821 03:34:45.134706    1442 node_conditions.go:123] node cpu capacity is 2
	I0821 03:34:45.134712    1442 node_conditions.go:105] duration metric: took 193.055083ms to run NodePressure ...
	I0821 03:34:45.134717    1442 start.go:228] waiting for startup goroutines ...
	I0821 03:34:45.490470    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:45.990643    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:46.490327    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:46.990587    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:47.490536    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:47.990358    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:48.490279    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:48.990490    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:49.490328    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:49.990414    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:50.490337    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:50.990260    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:51.490639    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:51.989843    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:52.490813    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:52.990112    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:53.491005    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:53.992627    1442 kapi.go:107] duration metric: took 20.017033875s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0821 03:40:32.405313    1442 kapi.go:107] duration metric: took 6m0.010490834s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	W0821 03:40:32.405643    1442 out.go:239] ! Enabling 'csi-hostpath-driver' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=csi-hostpath-driver pods: context deadline exceeded]
	I0821 03:40:32.421828    1442 kapi.go:107] duration metric: took 6m0.009978583s to wait for kubernetes.io/minikube-addons=registry ...
	W0821 03:40:32.421921    1442 out.go:239] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
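	
	Note: both six-minute timeouts above are kapi.go giving up on pods that never appeared (both label selectors matched 0 Pods back at 03:34:32); enablement for registry and csi-hostpath-driver had been skipped earlier because the host-status checks failed ("addons-500000" is not running, ... skipping enablement), so there was nothing for the wait loops to find. A first-pass check from the host would be something like:
	
	    kubectl -n kube-system get pods -l kubernetes.io/minikube-addons=registry
	    kubectl -n kube-system get pods -l kubernetes.io/minikube-addons=csi-hostpath-driver
	    minikube -p addons-500000 addons list
	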
	I0821 03:40:32.430174    1442 out.go:177] * Enabled addons: storage-provisioner, cloud-spanner, metrics-server, ingress-dns, inspektor-gadget, default-storageclass, volumesnapshots, gcp-auth, ingress
	I0821 03:40:32.437176    1442 addons.go:502] enable addons completed in 6m0.058033333s: enabled=[storage-provisioner cloud-spanner metrics-server ingress-dns inspektor-gadget default-storageclass volumesnapshots gcp-auth ingress]
	I0821 03:40:32.437214    1442 start.go:233] waiting for cluster config update ...
	I0821 03:40:32.437252    1442 start.go:242] writing updated cluster config ...
	I0821 03:40:32.438394    1442 ssh_runner.go:195] Run: rm -f paused
	I0821 03:40:32.505190    1442 start.go:600] kubectl: 1.27.2, cluster: 1.27.4 (minor skew: 0)
	I0821 03:40:32.509248    1442 out.go:177] * Done! kubectl is now configured to use "addons-500000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Mon 2023-08-21 10:34:00 UTC, ends at Mon 2023-08-21 11:02:25 UTC. --
	Aug 21 10:34:41 addons-500000 dockerd[1153]: time="2023-08-21T10:34:41.956624254Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 21 10:34:42 addons-500000 cri-dockerd[1049]: time="2023-08-21T10:34:42Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bbb4a4c960656b62bb19b9b067c655ea39e12d8756d8701729b8421b997616a1/resolv.conf as [nameserver 10.96.0.10 search ingress-nginx.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Aug 21 10:34:42 addons-500000 cri-dockerd[1049]: time="2023-08-21T10:34:42Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf: Status: Downloaded newer image for gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf"
	Aug 21 10:34:42 addons-500000 dockerd[1148]: time="2023-08-21T10:34:42.514519077Z" level=warning msg="reference for unknown type: " digest="sha256:e5c4824e7375fcf2a393e1c03c293b69759af37a9ca6abdb91b13d78a93da8bd" remote="registry.k8s.io/ingress-nginx/controller@sha256:e5c4824e7375fcf2a393e1c03c293b69759af37a9ca6abdb91b13d78a93da8bd"
	Aug 21 10:34:42 addons-500000 dockerd[1153]: time="2023-08-21T10:34:42.565577154Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 21 10:34:42 addons-500000 dockerd[1153]: time="2023-08-21T10:34:42.565634689Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 21 10:34:42 addons-500000 dockerd[1153]: time="2023-08-21T10:34:42.565652592Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 21 10:34:42 addons-500000 dockerd[1153]: time="2023-08-21T10:34:42.565663687Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 21 10:34:43 addons-500000 dockerd[1153]: time="2023-08-21T10:34:43.460515395Z" level=info msg="shim disconnected" id=d9032391cb53f0fa8cfd4e1696eef2d7eb7096ba08423fd5087bb7b4d2fba5ed namespace=moby
	Aug 21 10:34:43 addons-500000 dockerd[1153]: time="2023-08-21T10:34:43.460544530Z" level=warning msg="cleaning up after shim disconnected" id=d9032391cb53f0fa8cfd4e1696eef2d7eb7096ba08423fd5087bb7b4d2fba5ed namespace=moby
	Aug 21 10:34:43 addons-500000 dockerd[1153]: time="2023-08-21T10:34:43.460548812Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 21 10:34:43 addons-500000 dockerd[1148]: time="2023-08-21T10:34:43.460463883Z" level=info msg="ignoring event" container=d9032391cb53f0fa8cfd4e1696eef2d7eb7096ba08423fd5087bb7b4d2fba5ed module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 21 10:34:43 addons-500000 dockerd[1153]: time="2023-08-21T10:34:43.550734250Z" level=info msg="shim disconnected" id=3c57b48b5f08f4ead2c53d0b29e10a8a3dc35318069e85faa762b9ff0597901d namespace=moby
	Aug 21 10:34:43 addons-500000 dockerd[1148]: time="2023-08-21T10:34:43.550868047Z" level=info msg="ignoring event" container=3c57b48b5f08f4ead2c53d0b29e10a8a3dc35318069e85faa762b9ff0597901d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 21 10:34:43 addons-500000 dockerd[1153]: time="2023-08-21T10:34:43.550901548Z" level=warning msg="cleaning up after shim disconnected" id=3c57b48b5f08f4ead2c53d0b29e10a8a3dc35318069e85faa762b9ff0597901d namespace=moby
	Aug 21 10:34:43 addons-500000 dockerd[1153]: time="2023-08-21T10:34:43.550916158Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 21 10:34:52 addons-500000 cri-dockerd[1049]: time="2023-08-21T10:34:52Z" level=info msg="Pulling image registry.k8s.io/ingress-nginx/controller:v1.8.1@sha256:e5c4824e7375fcf2a393e1c03c293b69759af37a9ca6abdb91b13d78a93da8bd: df2bdb71e370: Extracting [=====================================>             ]  8.782MB/11.56MB"
	Aug 21 10:34:52 addons-500000 dockerd[1148]: time="2023-08-21T10:34:52.972147755Z" level=warning msg="ignored xattrs in archive: underlying filesystem doesn't support them" errors="[operation not supported]"
	Aug 21 10:34:52 addons-500000 dockerd[1148]: time="2023-08-21T10:34:52.973540499Z" level=warning msg="ignored xattrs in archive: underlying filesystem doesn't support them" errors="[operation not supported]"
	Aug 21 10:34:53 addons-500000 dockerd[1148]: time="2023-08-21T10:34:53.079609792Z" level=warning msg="ignored xattrs in archive: underlying filesystem doesn't support them" errors="[operation not supported]"
	Aug 21 10:34:53 addons-500000 cri-dockerd[1049]: time="2023-08-21T10:34:53Z" level=info msg="Stop pulling image registry.k8s.io/ingress-nginx/controller:v1.8.1@sha256:e5c4824e7375fcf2a393e1c03c293b69759af37a9ca6abdb91b13d78a93da8bd: Status: Downloaded newer image for registry.k8s.io/ingress-nginx/controller@sha256:e5c4824e7375fcf2a393e1c03c293b69759af37a9ca6abdb91b13d78a93da8bd"
	Aug 21 10:34:53 addons-500000 dockerd[1153]: time="2023-08-21T10:34:53.201046831Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 21 10:34:53 addons-500000 dockerd[1153]: time="2023-08-21T10:34:53.201094050Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 21 10:34:53 addons-500000 dockerd[1153]: time="2023-08-21T10:34:53.201110708Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 21 10:34:53 addons-500000 dockerd[1153]: time="2023-08-21T10:34:53.201117263Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                         ATTEMPT             POD ID
	734d7d69c9e8b       registry.k8s.io/ingress-nginx/controller@sha256:e5c4824e7375fcf2a393e1c03c293b69759af37a9ca6abdb91b13d78a93da8bd             27 minutes ago      Running             controller                   0                   bbb4a4c960656
	dbe5746b118a6       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf                 27 minutes ago      Running             gcp-auth                     0                   31154fc41fc35
	fc5767357c5d9       8f2588812ab29                                                                                                                27 minutes ago      Exited              patch                        1                   0538e79b5c883
	aa7d89a7d68d0       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:543c40fd093964bc9ab509d3e791f9989963021f1e9e4c9c7b6700b02bfb227b   27 minutes ago      Exited              create                       0                   3c078f4b9885e
	7979593c9bb52       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280      27 minutes ago      Running             volume-snapshot-controller   0                   70a68685a69fb
	fe9609fabef21       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280      27 minutes ago      Running             volume-snapshot-controller   0                   39eda7944d576
	16cfb4c805080       97e04611ad434                                                                                                                27 minutes ago      Running             coredns                      0                   b6fa8f87ea743
	36558206e7ebf       532e5a30e948f                                                                                                                27 minutes ago      Running             kube-proxy                   0                   ccc8633d52ca6
	bd48baf71b163       6eb63895cb67f                                                                                                                28 minutes ago      Running             kube-scheduler               0                   65c9ea48d27ae
	27dc2c0d7a4a5       24bc64e911039                                                                                                                28 minutes ago      Running             etcd                         0                   0f2cdc52bbda6
	dc949a6ce14c1       64aece92d6bde                                                                                                                28 minutes ago      Running             kube-apiserver               0                   090daa0e10080
	41982c5e9fc8f       389f6f052cf83                                                                                                                28 minutes ago      Running             kube-controller-manager      0                   a9c3d15b86bf8
	
	* 
	* ==> controller_ingress [734d7d69c9e8] <==
	*   Build:         dc88dce9ea5e700f3301d16f971fa17c6cfe757d
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.21.6
	
	-------------------------------------------------------------------------------
	
	W0821 10:34:53.255429       6 client_config.go:618] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
	I0821 10:34:53.255517       6 main.go:209] "Creating API client" host="https://10.96.0.1:443"
	I0821 10:34:53.259720       6 main.go:253] "Running in Kubernetes cluster" major="1" minor="27" git="v1.27.4" state="clean" commit="fa3d7990104d7c1f16943a67f11b154b71f6a132" platform="linux/arm64"
	I0821 10:34:53.370154       6 main.go:104] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I0821 10:34:53.376568       6 ssl.go:533] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0821 10:34:53.385083       6 nginx.go:261] "Starting NGINX Ingress controller"
	I0821 10:34:53.389190       6 event.go:285] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"5b999e5a-759f-47c2-858b-4e3d79b34cbe", APIVersion:"v1", ResourceVersion:"433", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0821 10:34:53.391567       6 event.go:285] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"a91d48bb-075d-496f-a947-fa3bf3c2ef7e", APIVersion:"v1", ResourceVersion:"434", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0821 10:34:53.391592       6 event.go:285] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"5124232c-77f2-4a7f-a11f-9600873ca980", APIVersion:"v1", ResourceVersion:"435", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0821 10:34:54.586254       6 nginx.go:304] "Starting NGINX process"
	I0821 10:34:54.586524       6 leaderelection.go:248] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0821 10:34:54.587191       6 nginx.go:324] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0821 10:34:54.588124       6 controller.go:190] "Configuration changes detected, backend reload required"
	I0821 10:34:54.605898       6 leaderelection.go:258] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0821 10:34:54.606668       6 status.go:84] "New leader elected" identity="ingress-nginx-controller-7799c6795f-4ppd9"
	I0821 10:34:54.622098       6 status.go:215] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-7799c6795f-4ppd9" node="addons-500000"
	I0821 10:34:54.663825       6 controller.go:207] "Backend successfully reloaded"
	I0821 10:34:54.663941       6 controller.go:218] "Initial sync, sleeping for 1 second"
	I0821 10:34:54.664013       6 event.go:285] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7799c6795f-4ppd9", UID:"c950764c-9601-4c76-adb3-ddb61bd6335d", APIVersion:"v1", ResourceVersion:"458", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	
	* 
	* ==> coredns [16cfb4c80508] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	[INFO] Reloading complete
	[INFO] 127.0.0.1:52450 - 49271 "HINFO IN 1467224369207536570.5830207891825585757. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.005303742s
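	
	Note: the Reload event here corresponds to the ConfigMap replace performed during startup (the host.minikube.internal injection logged at 03:34:33). The resulting Corefile can be inspected with:
	
	    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
	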
	
	* 
	* ==> describe nodes <==
	* Name:               addons-500000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-500000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43
	                    minikube.k8s.io/name=addons-500000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_21T03_34_19_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 21 Aug 2023 10:34:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-500000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 21 Aug 2023 11:02:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 21 Aug 2023 11:00:55 +0000   Mon, 21 Aug 2023 10:34:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 21 Aug 2023 11:00:55 +0000   Mon, 21 Aug 2023 10:34:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 21 Aug 2023 11:00:55 +0000   Mon, 21 Aug 2023 10:34:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 21 Aug 2023 11:00:55 +0000   Mon, 21 Aug 2023 10:34:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-500000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 0e4a1f71467c44c8a10eca186773afe2
	  System UUID:                0e4a1f71467c44c8a10eca186773afe2
	  Boot ID:                    6d5e7ffc-fb7d-41fe-b076-69fd8535d300
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.4
	  Kubelet Version:            v1.27.4
	  Kube-Proxy Version:         v1.27.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  gcp-auth                    gcp-auth-58478865f7-zcg47                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  ingress-nginx               ingress-nginx-controller-7799c6795f-4ppd9    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         27m
	  kube-system                 coredns-5d78c9869d-hbg44                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     27m
	  kube-system                 etcd-addons-500000                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         28m
	  kube-system                 kube-apiserver-addons-500000                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-addons-500000        200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-z2wj9                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-addons-500000                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 snapshot-controller-75bbb956b9-4pgqh         0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 snapshot-controller-75bbb956b9-j9mkf         0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 27m   kube-proxy       
	  Normal  Starting                 28m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  28m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  28m   kubelet          Node addons-500000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m   kubelet          Node addons-500000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m   kubelet          Node addons-500000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                28m   kubelet          Node addons-500000 status is now: NodeReady
	  Normal  RegisteredNode           27m   node-controller  Node addons-500000 event: Registered Node addons-500000 in Controller
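	
	Note: this node summary is equivalent to running `kubectl describe node addons-500000`; the resource-table percentages are computed against the Allocatable values above, e.g. 850m of CPU requested against 2 allocatable CPUs is 850/2000 ≈ 42%.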
	
	* 
	* ==> dmesg <==
	* [Aug21 10:33] ACPI: SRAT not present
	[  +0.000000] KASLR disabled due to lack of seed
	[  +0.638012] EINJ: EINJ table not found.
	[  +0.490829] systemd-fstab-generator[117]: Ignoring "noauto" for root device
	[  +0.044680] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000871] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[Aug21 10:34] systemd-fstab-generator[479]: Ignoring "noauto" for root device
	[  +0.063431] systemd-fstab-generator[490]: Ignoring "noauto" for root device
	[  +0.413293] systemd-fstab-generator[750]: Ignoring "noauto" for root device
	[  +0.194883] systemd-fstab-generator[786]: Ignoring "noauto" for root device
	[  +0.079334] systemd-fstab-generator[797]: Ignoring "noauto" for root device
	[  +0.075319] systemd-fstab-generator[810]: Ignoring "noauto" for root device
	[  +1.241580] systemd-fstab-generator[968]: Ignoring "noauto" for root device
	[  +0.080868] systemd-fstab-generator[979]: Ignoring "noauto" for root device
	[  +0.070572] systemd-fstab-generator[990]: Ignoring "noauto" for root device
	[  +0.067357] systemd-fstab-generator[1001]: Ignoring "noauto" for root device
	[  +0.069942] systemd-fstab-generator[1042]: Ignoring "noauto" for root device
	[  +2.503453] systemd-fstab-generator[1141]: Ignoring "noauto" for root device
	[  +2.381640] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.661766] systemd-fstab-generator[1457]: Ignoring "noauto" for root device
	[  +5.156537] systemd-fstab-generator[2350]: Ignoring "noauto" for root device
	[ +13.738428] kauditd_printk_skb: 41 callbacks suppressed
	[  +1.700338] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[  +4.800757] kauditd_printk_skb: 48 callbacks suppressed
	[ +14.143799] kauditd_printk_skb: 54 callbacks suppressed
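The "Driver has suspect GRO implementation" warning above is emitted by the guest's virtio-net driver and is normally harmless, but if TCP throughput inside the VM looks degraded it can be ruled out by disabling generic receive offload. A minimal sketch, assuming ethtool is present in the Buildroot guest image:

    # Show the current GRO setting, then turn it off on the VM's NIC.
    minikube -p addons-500000 ssh -- "ethtool -k eth0 | grep generic-receive-offload"
    minikube -p addons-500000 ssh -- "sudo ethtool -K eth0 gro off"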
	
	* 
	* ==> etcd [27dc2c0d7a4a] <==
	* {"level":"info","ts":"2023-08-21T10:34:15.986Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became leader at term 2"}
	{"level":"info","ts":"2023-08-21T10:34:15.986Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c46d288d2fcb0590 elected leader c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2023-08-21T10:34:15.991Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"c46d288d2fcb0590","local-member-attributes":"{Name:addons-500000 ClientURLs:[https://192.168.105.2:2379]}","request-path":"/0/members/c46d288d2fcb0590/attributes","cluster-id":"6e03e7863b4f9c54","publish-timeout":"7s"}
	{"level":"info","ts":"2023-08-21T10:34:15.991Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-21T10:34:15.991Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-21T10:34:15.991Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-08-21T10:34:15.992Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-08-21T10:34:16.003Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-08-21T10:34:15.992Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-21T10:34:16.003Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.105.2:2379"}
	{"level":"info","ts":"2023-08-21T10:34:15.992Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-21T10:34:16.003Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-21T10:34:16.003Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-21T10:44:16.025Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":841}
	{"level":"info","ts":"2023-08-21T10:44:16.028Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":841,"took":"2.672822ms","hash":3376273956}
	{"level":"info","ts":"2023-08-21T10:44:16.028Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3376273956,"revision":841,"compact-revision":-1}
	{"level":"info","ts":"2023-08-21T10:49:16.035Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1031}
	{"level":"info","ts":"2023-08-21T10:49:16.038Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1031,"took":"1.375633ms","hash":1895539758}
	{"level":"info","ts":"2023-08-21T10:49:16.038Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1895539758,"revision":1031,"compact-revision":841}
	{"level":"info","ts":"2023-08-21T10:54:16.045Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1222}
	{"level":"info","ts":"2023-08-21T10:54:16.047Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1222,"took":"1.459351ms","hash":3279763987}
	{"level":"info","ts":"2023-08-21T10:54:16.047Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3279763987,"revision":1222,"compact-revision":1031}
	{"level":"info","ts":"2023-08-21T10:59:16.058Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1413}
	{"level":"info","ts":"2023-08-21T10:59:16.061Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1413,"took":"1.488371ms","hash":1268235317}
	{"level":"info","ts":"2023-08-21T10:59:16.061Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1268235317,"revision":1413,"compact-revision":1222}
	
	* 
	* ==> gcp-auth [dbe5746b118a] <==
	* 2023/08/21 10:34:42 GCP Auth Webhook started!
	
	* 
	* ==> kernel <==
	*  11:02:26 up 28 min,  0 users,  load average: 0.28, 0.36, 0.31
	Linux addons-500000 5.10.57 #1 SMP PREEMPT Fri Jul 14 22:49:12 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [dc949a6ce14c] <==
	* I0821 10:39:16.759360       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0821 10:44:16.754789       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0821 10:44:16.754844       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0821 10:44:16.754880       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0821 10:44:16.754904       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0821 10:44:16.755317       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0821 10:44:16.755352       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0821 10:49:16.748790       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0821 10:49:16.749408       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0821 10:49:16.759393       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0821 10:49:16.759510       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0821 10:49:16.766063       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0821 10:49:16.766169       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0821 10:54:16.749624       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0821 10:54:16.750123       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0821 10:54:16.755478       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0821 10:54:16.755644       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0821 10:54:16.765351       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0821 10:54:16.765428       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0821 10:59:16.750519       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0821 10:59:16.751153       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0821 10:59:16.751904       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0821 10:59:16.752113       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0821 10:59:16.761892       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0821 10:59:16.761965       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
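The block above repeats at five-minute intervals (10:39, 10:44, 10:49, 10:54, 10:59), which appears to be a periodic sync re-registering the VolumeSnapshot CRD group versions with the ResourceManager; the lines are informational, not failures. What matters is whether the snapshot API surface is actually served, which can be checked from the client side; a sketch:

    # List the resources served under the snapshot API group.
    kubectl --context addons-500000 api-resources --api-group=snapshot.storage.k8s.io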
	
	* 
	* ==> kube-controller-manager [41982c5e9fc8] <==
	* I0821 10:34:42.731971       1 job_controller.go:523] enqueueing job ingress-nginx/ingress-nginx-admission-patch
	I0821 10:34:42.736066       1 job_controller.go:523] enqueueing job ingress-nginx/ingress-nginx-admission-patch
	I0821 10:34:42.737082       1 event.go:307] "Event occurred" object="ingress-nginx/ingress-nginx-admission-patch" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0821 10:34:42.747456       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0821 10:34:42.752783       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0821 10:34:42.756485       1 job_controller.go:523] enqueueing job ingress-nginx/ingress-nginx-admission-patch
	I0821 10:34:42.854473       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0821 10:34:42.856753       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0821 10:34:42.858553       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0821 10:34:42.858609       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0821 10:34:42.859646       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0821 10:34:42.893612       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0821 10:34:42.895861       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0821 10:34:42.897862       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-create" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0821 10:34:42.897954       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0821 10:34:42.899189       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0821 10:35:01.688712       1 resource_quota_monitor.go:223] "QuotaMonitor created object count evaluator" resource="volumesnapshots.snapshot.storage.k8s.io"
	I0821 10:35:01.688853       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0821 10:35:01.789717       1 shared_informer.go:318] Caches are synced for resource quota
	I0821 10:35:02.109377       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0821 10:35:02.210585       1 shared_informer.go:318] Caches are synced for garbage collector
	I0821 10:35:12.010356       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0821 10:35:12.011197       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0821 10:35:12.022044       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0821 10:35:12.024702       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	
	* 
	* ==> kube-proxy [36558206e7eb] <==
	* I0821 10:34:32.961845       1 node.go:141] Successfully retrieved node IP: 192.168.105.2
	I0821 10:34:32.961903       1 server_others.go:110] "Detected node IP" address="192.168.105.2"
	I0821 10:34:32.961922       1 server_others.go:554] "Using iptables proxy"
	I0821 10:34:32.984111       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0821 10:34:32.984124       1 server_others.go:192] "Using iptables Proxier"
	I0821 10:34:32.984147       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0821 10:34:32.984347       1 server.go:658] "Version info" version="v1.27.4"
	I0821 10:34:32.984357       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0821 10:34:32.984958       1 config.go:315] "Starting node config controller"
	I0821 10:34:32.984965       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0821 10:34:32.985291       1 config.go:188] "Starting service config controller"
	I0821 10:34:32.985295       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0821 10:34:32.985301       1 config.go:97] "Starting endpoint slice config controller"
	I0821 10:34:32.985318       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0821 10:34:33.085576       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0821 10:34:33.085604       1 shared_informer.go:318] Caches are synced for node config
	I0821 10:34:33.085608       1 shared_informer.go:318] Caches are synced for service config
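kube-proxy logs that it is running the iptables proxier in single-stack IPv4 mode, consistent with the IPv6 nat table being unavailable in this guest (see the kubelet section below). The effective mode is also exposed on kube-proxy's metrics port, which gives a quick cross-check of what the log claims; a sketch, assuming the default metrics bind on 127.0.0.1:10249:

    # kube-proxy reports its active proxy mode over HTTP.
    minikube -p addons-500000 ssh -- curl -s http://127.0.0.1:10249/proxyMode
    # expected output: iptables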
	
	* 
	* ==> kube-scheduler [bd48baf71b16] <==
	* W0821 10:34:16.768490       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0821 10:34:16.768493       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0821 10:34:16.768508       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0821 10:34:16.768511       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0821 10:34:16.768562       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0821 10:34:16.768566       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0821 10:34:17.606010       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0821 10:34:17.606029       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0821 10:34:17.645166       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0821 10:34:17.645193       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0821 10:34:17.674598       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0821 10:34:17.674623       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0821 10:34:17.707767       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0821 10:34:17.707781       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0821 10:34:17.724040       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0821 10:34:17.724057       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0821 10:34:17.728085       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0821 10:34:17.728146       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0821 10:34:17.756871       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0821 10:34:17.756889       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0821 10:34:17.785527       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0821 10:34:17.785576       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0821 10:34:17.785527       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0821 10:34:17.785647       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0821 10:34:20.949364       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
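The forbidden list/watch errors above are the usual scheduler startup race: it begins syncing informers before its RBAC bindings are visible to the API server, and the errors stop once the caches sync (last line). If they persisted, the scheduler's grants could be checked via impersonation; a sketch:

    # Verify the scheduler's RBAC permissions after startup.
    kubectl --context addons-500000 auth can-i list nodes --as=system:kube-scheduler
    kubectl --context addons-500000 auth can-i list replicasets.apps --as=system:kube-scheduler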
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-08-21 10:34:00 UTC, ends at Mon 2023-08-21 11:02:26 UTC. --
	Aug 21 10:57:19 addons-500000 kubelet[2369]: E0821 10:57:19.566647    2369 iptables.go:575] "Could not set up iptables canary" err=<
	Aug 21 10:57:19 addons-500000 kubelet[2369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 21 10:57:19 addons-500000 kubelet[2369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 21 10:57:19 addons-500000 kubelet[2369]:  > table=nat chain=KUBE-KUBELET-CANARY
	Aug 21 10:58:19 addons-500000 kubelet[2369]: E0821 10:58:19.563799    2369 iptables.go:575] "Could not set up iptables canary" err=<
	Aug 21 10:58:19 addons-500000 kubelet[2369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 21 10:58:19 addons-500000 kubelet[2369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 21 10:58:19 addons-500000 kubelet[2369]:  > table=nat chain=KUBE-KUBELET-CANARY
	Aug 21 10:59:19 addons-500000 kubelet[2369]: W0821 10:59:19.453422    2369 machine.go:65] Cannot read vendor id correctly, set empty.
	Aug 21 10:59:19 addons-500000 kubelet[2369]: E0821 10:59:19.563594    2369 iptables.go:575] "Could not set up iptables canary" err=<
	Aug 21 10:59:19 addons-500000 kubelet[2369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 21 10:59:19 addons-500000 kubelet[2369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 21 10:59:19 addons-500000 kubelet[2369]:  > table=nat chain=KUBE-KUBELET-CANARY
	Aug 21 11:00:19 addons-500000 kubelet[2369]: E0821 11:00:19.564930    2369 iptables.go:575] "Could not set up iptables canary" err=<
	Aug 21 11:00:19 addons-500000 kubelet[2369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 21 11:00:19 addons-500000 kubelet[2369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 21 11:00:19 addons-500000 kubelet[2369]:  > table=nat chain=KUBE-KUBELET-CANARY
	Aug 21 11:01:19 addons-500000 kubelet[2369]: E0821 11:01:19.566020    2369 iptables.go:575] "Could not set up iptables canary" err=<
	Aug 21 11:01:19 addons-500000 kubelet[2369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 21 11:01:19 addons-500000 kubelet[2369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 21 11:01:19 addons-500000 kubelet[2369]:  > table=nat chain=KUBE-KUBELET-CANARY
	Aug 21 11:02:19 addons-500000 kubelet[2369]: E0821 11:02:19.567176    2369 iptables.go:575] "Could not set up iptables canary" err=<
	Aug 21 11:02:19 addons-500000 kubelet[2369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 21 11:02:19 addons-500000 kubelet[2369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 21 11:02:19 addons-500000 kubelet[2369]:  > table=nat chain=KUBE-KUBELET-CANARY
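Every kubelet error above is the same benign condition repeating once a minute: the iptables canary (kubelet periodically recreates a KUBE-KUBELET-CANARY chain to detect external rule flushes) cannot create the chain in the IPv6 nat table because the Buildroot guest kernel lacks ip6table_nat. On this IPv4-only cluster it is noise. A sketch to confirm the missing module from inside the guest:

    # The IPv6 nat table exists only if ip6table_nat is loaded or built in.
    minikube -p addons-500000 ssh -- "lsmod | grep ip6table_nat || echo ip6table_nat not loaded"
    minikube -p addons-500000 ssh -- "sudo ip6tables -t nat -L 2>&1 | head -n 1"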
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-500000 -n addons-500000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-500000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-cxgb2 ingress-nginx-admission-patch-fkwhp
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/InspektorGadget]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-500000 describe pod ingress-nginx-admission-create-cxgb2 ingress-nginx-admission-patch-fkwhp
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-500000 describe pod ingress-nginx-admission-create-cxgb2 ingress-nginx-admission-patch-fkwhp: exit status 1 (37.790042ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-cxgb2" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-fkwhp" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-500000 describe pod ingress-nginx-admission-create-cxgb2 ingress-nginx-admission-patch-fkwhp: exit status 1
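The NotFound results are expected rather than a second failure: ingress-nginx-admission-create and ingress-nginx-admission-patch are one-shot Jobs, so their Succeeded pods can be cleaned up between the field-selector listing and the describe call. To inspect what remains, one could query the Jobs themselves; a sketch against the same context:

    # Completed admission Jobs stay visible after their pods are removed.
    kubectl --context addons-500000 -n ingress-nginx get jobs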
--- FAIL: TestAddons/parallel/InspektorGadget (480.94s)

TestAddons/parallel/MetricsServer (720.9s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:381: failed waiting for metrics-server deployment to stabilize: timed out waiting for the condition
addons_test.go:383: metrics-server stabilized in 6m0.002173708s
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
addons_test.go:385: ***** TestAddons/parallel/MetricsServer: pod "k8s-app=metrics-server" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:385: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-500000 -n addons-500000
addons_test.go:385: TestAddons/parallel/MetricsServer: showing logs for failed pods as of 2023-08-21 04:04:33.64392 -0700 PDT m=+1878.612039709
addons_test.go:386: failed waiting for k8s-app=metrics-server pod: k8s-app=metrics-server within 6m0s: context deadline exceeded
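The harness gave metrics-server two 6-minute windows (deployment stabilization, then pod readiness) and both timed out. The same wait can be reproduced by hand to see where the rollout is stuck; a sketch, assuming the addons-500000 cluster is still running:

    # What the test waits for: a ready metrics-server deployment and pod...
    kubectl --context addons-500000 -n kube-system get deploy metrics-server
    kubectl --context addons-500000 -n kube-system get pods -l k8s-app=metrics-server
    # ...and, once metrics flow, a working top subcommand.
    kubectl --context addons-500000 top nodes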
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-500000 -n addons-500000
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-500000 logs -n 25
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-670000 | jenkins | v1.31.2 | 21 Aug 23 03:33 PDT |                     |
	|         | -p download-only-670000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	| start   | -o=json --download-only           | download-only-670000 | jenkins | v1.31.2 | 21 Aug 23 03:33 PDT |                     |
	|         | -p download-only-670000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.4      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	| start   | -o=json --download-only           | download-only-670000 | jenkins | v1.31.2 | 21 Aug 23 03:33 PDT |                     |
	|         | -p download-only-670000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.0-rc.1 |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.31.2 | 21 Aug 23 03:33 PDT | 21 Aug 23 03:33 PDT |
	| delete  | -p download-only-670000           | download-only-670000 | jenkins | v1.31.2 | 21 Aug 23 03:33 PDT | 21 Aug 23 03:33 PDT |
	| delete  | -p download-only-670000           | download-only-670000 | jenkins | v1.31.2 | 21 Aug 23 03:33 PDT | 21 Aug 23 03:33 PDT |
	| start   | --download-only -p                | binary-mirror-462000 | jenkins | v1.31.2 | 21 Aug 23 03:33 PDT |                     |
	|         | binary-mirror-462000              |                      |         |         |                     |                     |
	|         | --alsologtostderr                 |                      |         |         |                     |                     |
	|         | --binary-mirror                   |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49329            |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-462000           | binary-mirror-462000 | jenkins | v1.31.2 | 21 Aug 23 03:33 PDT | 21 Aug 23 03:33 PDT |
	| start   | -p addons-500000                  | addons-500000        | jenkins | v1.31.2 | 21 Aug 23 03:33 PDT | 21 Aug 23 03:40 PDT |
	|         | --wait=true --memory=4000         |                      |         |         |                     |                     |
	|         | --alsologtostderr                 |                      |         |         |                     |                     |
	|         | --addons=registry                 |                      |         |         |                     |                     |
	|         | --addons=metrics-server           |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots          |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver      |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                 |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner            |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget         |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	|         | --addons=ingress                  |                      |         |         |                     |                     |
	|         | --addons=ingress-dns              |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p          | addons-500000        | jenkins | v1.31.2 | 21 Aug 23 03:52 PDT |                     |
	|         | addons-500000                     |                      |         |         |                     |                     |
	| ssh     | addons-500000 ssh curl -s         | addons-500000        | jenkins | v1.31.2 | 21 Aug 23 04:02 PDT | 21 Aug 23 04:02 PDT |
	|         | http://127.0.0.1/ -H 'Host:       |                      |         |         |                     |                     |
	|         | nginx.example.com'                |                      |         |         |                     |                     |
	| ip      | addons-500000 ip                  | addons-500000        | jenkins | v1.31.2 | 21 Aug 23 04:02 PDT | 21 Aug 23 04:02 PDT |
	| addons  | addons-500000 addons disable      | addons-500000        | jenkins | v1.31.2 | 21 Aug 23 04:02 PDT |                     |
	|         | ingress-dns --alsologtostderr     |                      |         |         |                     |                     |
	|         | -v=1                              |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/21 03:33:48
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.7 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0821 03:33:48.415064    1442 out.go:296] Setting OutFile to fd 1 ...
	I0821 03:33:48.415176    1442 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 03:33:48.415179    1442 out.go:309] Setting ErrFile to fd 2...
	I0821 03:33:48.415182    1442 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 03:33:48.415284    1442 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17102-920/.minikube/bin
	I0821 03:33:48.416485    1442 out.go:303] Setting JSON to false
	I0821 03:33:48.431675    1442 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":202,"bootTime":1692613826,"procs":392,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0821 03:33:48.431757    1442 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0821 03:33:48.436776    1442 out.go:177] * [addons-500000] minikube v1.31.2 on Darwin 13.5 (arm64)
	I0821 03:33:48.443786    1442 out.go:177]   - MINIKUBE_LOCATION=17102
	I0821 03:33:48.443817    1442 notify.go:220] Checking for updates...
	I0821 03:33:48.452754    1442 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	I0821 03:33:48.459793    1442 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0821 03:33:48.466761    1442 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0821 03:33:48.469754    1442 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	I0821 03:33:48.472801    1442 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0821 03:33:48.476845    1442 driver.go:373] Setting default libvirt URI to qemu:///system
	I0821 03:33:48.479685    1442 out.go:177] * Using the qemu2 driver based on user configuration
	I0821 03:33:48.486794    1442 start.go:298] selected driver: qemu2
	I0821 03:33:48.486801    1442 start.go:902] validating driver "qemu2" against <nil>
	I0821 03:33:48.486809    1442 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0821 03:33:48.488928    1442 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0821 03:33:48.491687    1442 out.go:177] * Automatically selected the socket_vmnet network
	I0821 03:33:48.495787    1442 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0821 03:33:48.495806    1442 cni.go:84] Creating CNI manager for ""
	I0821 03:33:48.495814    1442 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0821 03:33:48.495818    1442 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0821 03:33:48.495823    1442 start_flags.go:319] config:
	{Name:addons-500000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:addons-500000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 03:33:48.500226    1442 iso.go:125] acquiring lock: {Name:mk813ea611542195bb0511881888be3fabc72ff6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 03:33:48.506762    1442 out.go:177] * Starting control plane node addons-500000 in cluster addons-500000
	I0821 03:33:48.510761    1442 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime docker
	I0821 03:33:48.510781    1442 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4
	I0821 03:33:48.510799    1442 cache.go:57] Caching tarball of preloaded images
	I0821 03:33:48.510861    1442 preload.go:174] Found /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0821 03:33:48.510867    1442 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on docker
	I0821 03:33:48.511057    1442 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/config.json ...
	I0821 03:33:48.511069    1442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/config.json: {Name:mke6ea6a330608889e821054234e4dab41e05376 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:33:48.511283    1442 start.go:365] acquiring machines lock for addons-500000: {Name:mk9b32d9fe994be32d77812db464b2cfa7bfb400 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0821 03:33:48.511397    1442 start.go:369] acquired machines lock for "addons-500000" in 109.25µs
	I0821 03:33:48.511409    1442 start.go:93] Provisioning new machine with config: &{Name:addons-500000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:addons-500000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0821 03:33:48.511444    1442 start.go:125] createHost starting for "" (driver="qemu2")
	I0821 03:33:48.515777    1442 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0821 03:33:48.825711    1442 start.go:159] libmachine.API.Create for "addons-500000" (driver="qemu2")
	I0821 03:33:48.825759    1442 client.go:168] LocalClient.Create starting
	I0821 03:33:48.825907    1442 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem
	I0821 03:33:48.926786    1442 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/cert.pem
	I0821 03:33:49.005435    1442 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17102-920/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17102-920/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0821 03:33:49.429478    1442 main.go:141] libmachine: Creating SSH key...
	I0821 03:33:49.603069    1442 main.go:141] libmachine: Creating Disk image...
	I0821 03:33:49.603078    1442 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0821 03:33:49.603290    1442 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/disk.qcow2.raw /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/disk.qcow2
	I0821 03:33:49.637224    1442 main.go:141] libmachine: STDOUT: 
	I0821 03:33:49.637249    1442 main.go:141] libmachine: STDERR: 
	I0821 03:33:49.637377    1442 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/disk.qcow2 +20000M
	I0821 03:33:49.644766    1442 main.go:141] libmachine: STDOUT: Image resized.
	
	I0821 03:33:49.644778    1442 main.go:141] libmachine: STDERR: 
	I0821 03:33:49.644801    1442 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/disk.qcow2
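The two qemu-img steps above first convert the raw seed image to qcow2 and then grow its virtual size by 20000 MB; because qcow2 is sparse, the file on disk stays small until the guest writes data. Both sizes can be read back; a sketch using the path from the log:

    # "virtual size" should show ~20 GB; "disk size" is the actual allocation.
    qemu-img info /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/disk.qcow2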
	I0821 03:33:49.644808    1442 main.go:141] libmachine: Starting QEMU VM...
	I0821 03:33:49.644850    1442 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:15:38:20:81:6d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/disk.qcow2
	I0821 03:33:49.712858    1442 main.go:141] libmachine: STDOUT: 
	I0821 03:33:49.712896    1442 main.go:141] libmachine: STDERR: 
	I0821 03:33:49.712900    1442 main.go:141] libmachine: Attempt 0
	I0821 03:33:49.712923    1442 main.go:141] libmachine: Searching for 5e:15:38:20:81:6d in /var/db/dhcpd_leases ...
	I0821 03:33:51.714037    1442 main.go:141] libmachine: Attempt 1
	I0821 03:33:51.714122    1442 main.go:141] libmachine: Searching for 5e:15:38:20:81:6d in /var/db/dhcpd_leases ...
	I0821 03:33:53.715339    1442 main.go:141] libmachine: Attempt 2
	I0821 03:33:53.715370    1442 main.go:141] libmachine: Searching for 5e:15:38:20:81:6d in /var/db/dhcpd_leases ...
	I0821 03:33:55.716394    1442 main.go:141] libmachine: Attempt 3
	I0821 03:33:55.716406    1442 main.go:141] libmachine: Searching for 5e:15:38:20:81:6d in /var/db/dhcpd_leases ...
	I0821 03:33:57.717443    1442 main.go:141] libmachine: Attempt 4
	I0821 03:33:57.717472    1442 main.go:141] libmachine: Searching for 5e:15:38:20:81:6d in /var/db/dhcpd_leases ...
	I0821 03:33:59.718558    1442 main.go:141] libmachine: Attempt 5
	I0821 03:33:59.718579    1442 main.go:141] libmachine: Searching for 5e:15:38:20:81:6d in /var/db/dhcpd_leases ...
	I0821 03:34:01.719634    1442 main.go:141] libmachine: Attempt 6
	I0821 03:34:01.719657    1442 main.go:141] libmachine: Searching for 5e:15:38:20:81:6d in /var/db/dhcpd_leases ...
	I0821 03:34:01.719810    1442 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0821 03:34:01.719849    1442 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:5e:15:38:20:81:6d ID:1,5e:15:38:20:81:6d Lease:0x64e48f18}
	I0821 03:34:01.719855    1442 main.go:141] libmachine: Found match: 5e:15:38:20:81:6d
	I0821 03:34:01.719867    1442 main.go:141] libmachine: IP: 192.168.105.2
	I0821 03:34:01.719873    1442 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
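The attempt loop above is libmachine polling macOS's DHCP lease database every two seconds until the MAC address it generated for the VM appears; attempt 6 finds the lease and yields the node IP 192.168.105.2. The same lookup can be done by hand; a sketch with this run's MAC:

    # vmnet/bootpd leases live in /var/db/dhcpd_leases on macOS.
    grep -B1 -A4 '5e:15:38:20:81:6d' /var/db/dhcpd_leases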
	I0821 03:34:03.738025    1442 machine.go:88] provisioning docker machine ...
	I0821 03:34:03.738086    1442 buildroot.go:166] provisioning hostname "addons-500000"
	I0821 03:34:03.739549    1442 main.go:141] libmachine: Using SSH client type: native
	I0821 03:34:03.740347    1442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102aae1e0] 0x102ab0c40 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0821 03:34:03.740367    1442 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-500000 && echo "addons-500000" | sudo tee /etc/hostname
	I0821 03:34:03.826570    1442 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-500000
	
	I0821 03:34:03.826696    1442 main.go:141] libmachine: Using SSH client type: native
	I0821 03:34:03.827174    1442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102aae1e0] 0x102ab0c40 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0821 03:34:03.827189    1442 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-500000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-500000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-500000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0821 03:34:03.891757    1442 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0821 03:34:03.891772    1442 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17102-920/.minikube CaCertPath:/Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17102-920/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17102-920/.minikube}
	I0821 03:34:03.891782    1442 buildroot.go:174] setting up certificates
	I0821 03:34:03.891796    1442 provision.go:83] configureAuth start
	I0821 03:34:03.891801    1442 provision.go:138] copyHostCerts
	I0821 03:34:03.891982    1442 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17102-920/.minikube/ca.pem (1078 bytes)
	I0821 03:34:03.892356    1442 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17102-920/.minikube/cert.pem (1123 bytes)
	I0821 03:34:03.892494    1442 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17102-920/.minikube/key.pem (1679 bytes)
	I0821 03:34:03.892606    1442 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17102-920/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca-key.pem org=jenkins.addons-500000 san=[192.168.105.2 192.168.105.2 localhost 127.0.0.1 minikube addons-500000]
	I0821 03:34:04.055231    1442 provision.go:172] copyRemoteCerts
	I0821 03:34:04.055290    1442 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0821 03:34:04.055299    1442 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/id_rsa Username:docker}
	I0821 03:34:04.085022    1442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0821 03:34:04.091757    1442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0821 03:34:04.098302    1442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0821 03:34:04.105297    1442 provision.go:86] duration metric: configureAuth took 213.489792ms
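configureAuth above generates a CA-signed server certificate whose SANs cover the VM IP, localhost, and the machine names, then copies it to /etc/docker in the guest so dockerd can require TLS on port 2376. The SAN list can be inspected with openssl; a sketch using the host-side path from the log:

    # Print the Subject Alternative Names baked into the server cert.
    openssl x509 -in /Users/jenkins/minikube-integration/17102-920/.minikube/machines/server.pem \
      -noout -text | grep -A1 'Subject Alternative Name'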
	I0821 03:34:04.105304    1442 buildroot.go:189] setting minikube options for container-runtime
	I0821 03:34:04.105410    1442 config.go:182] Loaded profile config "addons-500000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0821 03:34:04.105443    1442 main.go:141] libmachine: Using SSH client type: native
	I0821 03:34:04.105658    1442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102aae1e0] 0x102ab0c40 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0821 03:34:04.105665    1442 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0821 03:34:04.160033    1442 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0821 03:34:04.160039    1442 buildroot.go:70] root file system type: tmpfs
	I0821 03:34:04.160095    1442 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0821 03:34:04.160145    1442 main.go:141] libmachine: Using SSH client type: native
	I0821 03:34:04.160376    1442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102aae1e0] 0x102ab0c40 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0821 03:34:04.160410    1442 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0821 03:34:04.217511    1442 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0821 03:34:04.217555    1442 main.go:141] libmachine: Using SSH client type: native
	I0821 03:34:04.217777    1442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102aae1e0] 0x102ab0c40 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0821 03:34:04.217788    1442 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0821 03:34:04.516566    1442 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0821 03:34:04.516576    1442 machine.go:91] provisioned docker machine in 778.543875ms
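The unit swap just logged is deliberately idempotent: the candidate file is only promoted (and docker reloaded, enabled, and restarted) when it differs from the unit already on disk. The same pattern, reproduced as a standalone shell sketch against the guest paths above:

	if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
	    # first boot: docker.service does not exist yet, so diff fails and the new unit is installed
	    sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
	    sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker
	fi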
	I0821 03:34:04.516581    1442 client.go:171] LocalClient.Create took 15.691254833s
	I0821 03:34:04.516600    1442 start.go:167] duration metric: libmachine.API.Create for "addons-500000" took 15.691329875s
	I0821 03:34:04.516605    1442 start.go:300] post-start starting for "addons-500000" (driver="qemu2")
	I0821 03:34:04.516610    1442 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0821 03:34:04.516676    1442 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0821 03:34:04.516684    1442 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/id_rsa Username:docker}
	I0821 03:34:04.547645    1442 ssh_runner.go:195] Run: cat /etc/os-release
	I0821 03:34:04.548977    1442 info.go:137] Remote host: Buildroot 2021.02.12
	I0821 03:34:04.548988    1442 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17102-920/.minikube/addons for local assets ...
	I0821 03:34:04.549067    1442 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17102-920/.minikube/files for local assets ...
	I0821 03:34:04.549094    1442 start.go:303] post-start completed in 32.487208ms
	I0821 03:34:04.549503    1442 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/config.json ...
	I0821 03:34:04.549671    1442 start.go:128] duration metric: createHost completed in 16.038665083s
	I0821 03:34:04.549713    1442 main.go:141] libmachine: Using SSH client type: native
	I0821 03:34:04.549937    1442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102aae1e0] 0x102ab0c40 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0821 03:34:04.549942    1442 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0821 03:34:04.603319    1442 main.go:141] libmachine: SSH cmd err, output: <nil>: 1692614044.503149419
	
	I0821 03:34:04.603325    1442 fix.go:206] guest clock: 1692614044.503149419
	I0821 03:34:04.603329    1442 fix.go:219] Guest: 2023-08-21 03:34:04.503149419 -0700 PDT Remote: 2023-08-21 03:34:04.549674 -0700 PDT m=+16.153755168 (delta=-46.524581ms)
	I0821 03:34:04.603340    1442 fix.go:190] guest clock delta is within tolerance: -46.524581ms
	I0821 03:34:04.603349    1442 start.go:83] releasing machines lock for "addons-500000", held for 16.092394834s
	I0821 03:34:04.603625    1442 ssh_runner.go:195] Run: cat /version.json
	I0821 03:34:04.603635    1442 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/id_rsa Username:docker}
	I0821 03:34:04.603639    1442 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0821 03:34:04.603685    1442 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/id_rsa Username:docker}
	I0821 03:34:04.631400    1442 ssh_runner.go:195] Run: systemctl --version
	I0821 03:34:04.633303    1442 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0821 03:34:04.675003    1442 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0821 03:34:04.675044    1442 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0821 03:34:04.680093    1442 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0821 03:34:04.680102    1442 start.go:466] detecting cgroup driver to use...
	I0821 03:34:04.680217    1442 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0821 03:34:04.685575    1442 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0821 03:34:04.689003    1442 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0821 03:34:04.692463    1442 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0821 03:34:04.692496    1442 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0821 03:34:04.695492    1442 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0821 03:34:04.698438    1442 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0821 03:34:04.701779    1442 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0821 03:34:04.705308    1442 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0821 03:34:04.708997    1442 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
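The sed edits above only tweak individual keys in /etc/containerd/config.toml; the full file is never echoed, so this is just a spot-check of the end state those commands establish (key names taken from the edits above):

	# expect: sandbox_image = "registry.k8s.io/pause:3.9", restrict_oom_score_adj = false,
	#         SystemdCgroup = false, conf_dir = "/etc/cni/net.d"
	sudo grep -E 'sandbox_image|restrict_oom_score_adj|SystemdCgroup|conf_dir' /etc/containerd/config.toml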
	I0821 03:34:04.712485    1442 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0821 03:34:04.715157    1442 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0821 03:34:04.718062    1442 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0821 03:34:04.801182    1442 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0821 03:34:04.809752    1442 start.go:466] detecting cgroup driver to use...
	I0821 03:34:04.809829    1442 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0821 03:34:04.815491    1442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0821 03:34:04.820439    1442 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0821 03:34:04.826330    1442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0821 03:34:04.831197    1442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0821 03:34:04.835955    1442 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0821 03:34:04.893707    1442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0821 03:34:04.899704    1442 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0821 03:34:04.905738    1442 ssh_runner.go:195] Run: which cri-dockerd
	I0821 03:34:04.907314    1442 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0821 03:34:04.910018    1442 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0821 03:34:04.915159    1442 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0821 03:34:04.993497    1442 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0821 03:34:05.073322    1442 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0821 03:34:05.073337    1442 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
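The 144-byte daemon.json itself is not echoed in the log; a minimal sketch that pins the cgroupfs driver the way this step does (anything beyond the exec-opts key is an assumption) would be:

	sudo tee /etc/docker/daemon.json >/dev/null <<'EOF'
	{
	  "exec-opts": ["native.cgroupdriver=cgroupfs"]
	}
	EOF
	# dockerd only re-reads this file on restart, hence the daemon-reload + restart that follow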
	I0821 03:34:05.078736    1442 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0821 03:34:05.148942    1442 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0821 03:34:06.310888    1442 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.161962625s)
	I0821 03:34:06.310946    1442 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0821 03:34:06.389910    1442 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0821 03:34:06.470512    1442 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0821 03:34:06.540771    1442 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0821 03:34:06.608028    1442 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0821 03:34:06.614951    1442 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0821 03:34:06.680856    1442 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0821 03:34:06.705016    1442 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0821 03:34:06.705100    1442 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0821 03:34:06.707492    1442 start.go:534] Will wait 60s for crictl version
	I0821 03:34:06.707526    1442 ssh_runner.go:195] Run: which crictl
	I0821 03:34:06.708906    1442 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0821 03:34:06.723485    1442 start.go:550] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.4
	RuntimeApiVersion:  v1alpha2
	I0821 03:34:06.723553    1442 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0821 03:34:06.733136    1442 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0821 03:34:06.752243    1442 out.go:204] * Preparing Kubernetes v1.27.4 on Docker 24.0.4 ...
	I0821 03:34:06.752395    1442 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0821 03:34:06.753728    1442 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0821 03:34:06.757671    1442 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime docker
	I0821 03:34:06.757717    1442 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0821 03:34:06.767699    1442 docker.go:636] Got preloaded images: 
	I0821 03:34:06.767706    1442 docker.go:642] registry.k8s.io/kube-apiserver:v1.27.4 wasn't preloaded
	I0821 03:34:06.767758    1442 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0821 03:34:06.770623    1442 ssh_runner.go:195] Run: which lz4
	I0821 03:34:06.772016    1442 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0821 03:34:06.773407    1442 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0821 03:34:06.773426    1442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (343658271 bytes)
	I0821 03:34:08.065715    1442 docker.go:600] Took 1.293779 seconds to copy over tarball
	I0821 03:34:08.065776    1442 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0821 03:34:09.083194    1442 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.017432542s)
	I0821 03:34:09.083208    1442 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0821 03:34:09.098174    1442 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0821 03:34:09.101758    1442 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0821 03:34:09.107271    1442 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0821 03:34:09.185186    1442 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0821 03:34:11.583398    1442 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.398262792s)
	I0821 03:34:11.583497    1442 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0821 03:34:11.599112    1442 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.4
	registry.k8s.io/kube-controller-manager:v1.27.4
	registry.k8s.io/kube-scheduler:v1.27.4
	registry.k8s.io/kube-proxy:v1.27.4
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0821 03:34:11.599121    1442 cache_images.go:84] Images are preloaded, skipping loading
	I0821 03:34:11.599173    1442 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0821 03:34:11.606813    1442 cni.go:84] Creating CNI manager for ""
	I0821 03:34:11.606822    1442 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0821 03:34:11.606852    1442 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0821 03:34:11.606862    1442 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.27.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-500000 NodeName:addons-500000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0821 03:34:11.606930    1442 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-500000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0821 03:34:11.606959    1442 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-500000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.4 ClusterName:addons-500000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0821 03:34:11.607013    1442 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.4
	I0821 03:34:11.609958    1442 binaries.go:44] Found k8s binaries, skipping transfer
	I0821 03:34:11.609992    1442 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0821 03:34:11.613080    1442 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0821 03:34:11.618135    1442 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
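The two copies above install the kubelet drop-in (10-kubeadm.conf) and its base unit; once systemd re-reads its units, the merged result, base unit plus the ExecStart override printed earlier, can be inspected with:

	sudo systemctl cat kubelet    # prints kubelet.service plus the 10-kubeadm.conf drop-in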
	I0821 03:34:11.623217    1442 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
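kubeadm.yaml.new is promoted to /var/tmp/minikube/kubeadm.yaml before init (see the cp at 03:34:12 below); once promoted, the assembled config can also be exercised without touching the node via kubeadm's dry-run mode, a sketch:

	sudo /var/lib/minikube/binaries/v1.27.4/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run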
	I0821 03:34:11.628067    1442 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0821 03:34:11.629338    1442 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0821 03:34:11.633264    1442 certs.go:56] Setting up /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000 for IP: 192.168.105.2
	I0821 03:34:11.633272    1442 certs.go:190] acquiring lock for shared ca certs: {Name:mkaf8bee91c9bef113528e728629bac5c142d5d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:34:11.633419    1442 certs.go:204] generating minikubeCA CA: /Users/jenkins/minikube-integration/17102-920/.minikube/ca.key
	I0821 03:34:11.709497    1442 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17102-920/.minikube/ca.crt ...
	I0821 03:34:11.709504    1442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/ca.crt: {Name:mk11304afc04d282dffa1bbfafecb7763b86f0d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:34:11.709741    1442 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17102-920/.minikube/ca.key ...
	I0821 03:34:11.709747    1442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/ca.key: {Name:mk7632addcfceaabe09bce428c8dd59051132a6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:34:11.709856    1442 certs.go:204] generating proxyClientCA CA: /Users/jenkins/minikube-integration/17102-920/.minikube/proxy-client-ca.key
	I0821 03:34:11.928292    1442 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17102-920/.minikube/proxy-client-ca.crt ...
	I0821 03:34:11.928298    1442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/proxy-client-ca.crt: {Name:mk59ba2d6f1e462ee2e456d21a76e6acaba82b70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:34:11.928531    1442 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17102-920/.minikube/proxy-client-ca.key ...
	I0821 03:34:11.928534    1442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/proxy-client-ca.key: {Name:mk02c96134c44ce7714696be07e0b5c22f58dc64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:34:11.928684    1442 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/client.key
	I0821 03:34:11.928691    1442 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/client.crt with IP's: []
	I0821 03:34:12.116170    1442 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/client.crt ...
	I0821 03:34:12.116177    1442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/client.crt: {Name:mk3182b685506ec2dbfcad41054e3ffc2bf0f3b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:34:12.116379    1442 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/client.key ...
	I0821 03:34:12.116384    1442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/client.key: {Name:mk087ee0a568a92e1e97ae6eb06dd6604454b2e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:34:12.116489    1442 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/apiserver.key.96055969
	I0821 03:34:12.116499    1442 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/apiserver.crt.96055969 with IP's: [192.168.105.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0821 03:34:12.174634    1442 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/apiserver.crt.96055969 ...
	I0821 03:34:12.174637    1442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/apiserver.crt.96055969: {Name:mk02f137a3a75334a28e6811666f6d1dde47709c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:34:12.174771    1442 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/apiserver.key.96055969 ...
	I0821 03:34:12.174774    1442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/apiserver.key.96055969: {Name:mk629f60ce1370d0aadb852a255428713cef631b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:34:12.174873    1442 certs.go:337] copying /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/apiserver.crt.96055969 -> /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/apiserver.crt
	I0821 03:34:12.175028    1442 certs.go:341] copying /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/apiserver.key.96055969 -> /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/apiserver.key
	I0821 03:34:12.175114    1442 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/proxy-client.key
	I0821 03:34:12.175123    1442 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/proxy-client.crt with IP's: []
	I0821 03:34:12.291172    1442 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/proxy-client.crt ...
	I0821 03:34:12.291175    1442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/proxy-client.crt: {Name:mk4861ba5de37ed8d82543663b167ed0e04664dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:34:12.291331    1442 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/proxy-client.key ...
	I0821 03:34:12.291334    1442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/proxy-client.key: {Name:mk5eb1fb206858f7f6262a3b86ec8673fdeb4399 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:34:12.291586    1442 certs.go:437] found cert: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca-key.pem (1679 bytes)
	I0821 03:34:12.291611    1442 certs.go:437] found cert: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem (1078 bytes)
	I0821 03:34:12.291633    1442 certs.go:437] found cert: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/Users/jenkins/minikube-integration/17102-920/.minikube/certs/cert.pem (1123 bytes)
	I0821 03:34:12.291654    1442 certs.go:437] found cert: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/Users/jenkins/minikube-integration/17102-920/.minikube/certs/key.pem (1679 bytes)
	I0821 03:34:12.292029    1442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0821 03:34:12.300489    1442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0821 03:34:12.307765    1442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0821 03:34:12.314499    1442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0821 03:34:12.321449    1442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0821 03:34:12.328965    1442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0821 03:34:12.336085    1442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0821 03:34:12.342676    1442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0821 03:34:12.349529    1442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0821 03:34:12.356907    1442 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0821 03:34:12.363000    1442 ssh_runner.go:195] Run: openssl version
	I0821 03:34:12.364943    1442 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0821 03:34:12.368659    1442 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0821 03:34:12.370316    1442 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 21 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I0821 03:34:12.370337    1442 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0821 03:34:12.372170    1442 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
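The b5213941.0 link name is OpenSSL's subject-hash form of the CA, which is what the 'openssl x509 -hash -noout' step above computes; the mapping can be reproduced directly:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem    # prints b5213941
	# OpenSSL resolves trust lookups via <hash>.0 symlinks in /etc/ssl/certs, hence b5213941.0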
	I0821 03:34:12.375051    1442 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0821 03:34:12.376254    1442 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0821 03:34:12.376292    1442 kubeadm.go:404] StartCluster: {Name:addons-500000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:addons-500000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 03:34:12.376353    1442 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0821 03:34:12.381765    1442 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0821 03:34:12.385127    1442 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0821 03:34:12.388050    1442 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0821 03:34:12.390699    1442 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0821 03:34:12.390714    1442 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0821 03:34:12.412358    1442 kubeadm.go:322] [init] Using Kubernetes version: v1.27.4
	I0821 03:34:12.412390    1442 kubeadm.go:322] [preflight] Running pre-flight checks
	I0821 03:34:12.465080    1442 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0821 03:34:12.465135    1442 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0821 03:34:12.465183    1442 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0821 03:34:12.530098    1442 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0821 03:34:12.539343    1442 out.go:204]   - Generating certificates and keys ...
	I0821 03:34:12.539375    1442 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0821 03:34:12.539413    1442 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0821 03:34:12.639909    1442 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0821 03:34:12.680054    1442 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0821 03:34:12.714095    1442 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0821 03:34:12.849965    1442 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0821 03:34:12.996137    1442 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0821 03:34:12.996199    1442 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-500000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0821 03:34:13.141022    1442 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0821 03:34:13.141102    1442 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-500000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0821 03:34:13.228117    1442 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0821 03:34:13.409230    1442 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0821 03:34:13.774136    1442 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0821 03:34:13.774180    1442 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0821 03:34:13.866700    1442 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0821 03:34:13.977782    1442 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0821 03:34:14.068222    1442 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0821 03:34:14.144551    1442 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0821 03:34:14.151809    1442 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0821 03:34:14.152307    1442 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0821 03:34:14.152438    1442 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0821 03:34:14.228545    1442 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0821 03:34:14.232527    1442 out.go:204]   - Booting up control plane ...
	I0821 03:34:14.232575    1442 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0821 03:34:14.232614    1442 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0821 03:34:14.232645    1442 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0821 03:34:14.236440    1442 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0821 03:34:14.238376    1442 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0821 03:34:18.241227    1442 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.002539 seconds
	I0821 03:34:18.241427    1442 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0821 03:34:18.252886    1442 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0821 03:34:18.774491    1442 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0821 03:34:18.774728    1442 kubeadm.go:322] [mark-control-plane] Marking the node addons-500000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0821 03:34:19.280325    1442 kubeadm.go:322] [bootstrap-token] Using token: jvxtql.8wgzhr7nb5g9o93n
	I0821 03:34:19.286479    1442 out.go:204]   - Configuring RBAC rules ...
	I0821 03:34:19.286537    1442 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0821 03:34:19.290363    1442 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0821 03:34:19.293121    1442 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0821 03:34:19.294256    1442 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0821 03:34:19.295736    1442 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0821 03:34:19.296773    1442 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0821 03:34:19.301173    1442 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0821 03:34:19.474355    1442 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0821 03:34:19.693544    1442 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0821 03:34:19.694011    1442 kubeadm.go:322] 
	I0821 03:34:19.694043    1442 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0821 03:34:19.694047    1442 kubeadm.go:322] 
	I0821 03:34:19.694084    1442 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0821 03:34:19.694086    1442 kubeadm.go:322] 
	I0821 03:34:19.694099    1442 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0821 03:34:19.694192    1442 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0821 03:34:19.694216    1442 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0821 03:34:19.694219    1442 kubeadm.go:322] 
	I0821 03:34:19.694251    1442 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0821 03:34:19.694263    1442 kubeadm.go:322] 
	I0821 03:34:19.694293    1442 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0821 03:34:19.694296    1442 kubeadm.go:322] 
	I0821 03:34:19.694320    1442 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0821 03:34:19.694360    1442 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0821 03:34:19.694390    1442 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0821 03:34:19.694394    1442 kubeadm.go:322] 
	I0821 03:34:19.694446    1442 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0821 03:34:19.694488    1442 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0821 03:34:19.694495    1442 kubeadm.go:322] 
	I0821 03:34:19.694535    1442 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token jvxtql.8wgzhr7nb5g9o93n \
	I0821 03:34:19.694617    1442 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:c361d9930575cb4141f86c9c696a425212668e350af0245a5e7de41b1bd48407 \
	I0821 03:34:19.694632    1442 kubeadm.go:322] 	--control-plane 
	I0821 03:34:19.694634    1442 kubeadm.go:322] 
	I0821 03:34:19.694684    1442 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0821 03:34:19.694688    1442 kubeadm.go:322] 
	I0821 03:34:19.694735    1442 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token jvxtql.8wgzhr7nb5g9o93n \
	I0821 03:34:19.694782    1442 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:c361d9930575cb4141f86c9c696a425212668e350af0245a5e7de41b1bd48407 
	I0821 03:34:19.694835    1442 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
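The --discovery-token-ca-cert-hash printed above can be re-derived from the cluster CA (certificatesDir is /var/lib/minikube/certs per the ClusterConfiguration earlier) with the standard recipe from the kubeadm docs:

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	    | openssl rsa -pubin -outform der 2>/dev/null \
	    | openssl dgst -sha256 -hex | sed 's/^.* //'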
	I0821 03:34:19.694840    1442 cni.go:84] Creating CNI manager for ""
	I0821 03:34:19.694847    1442 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0821 03:34:19.703814    1442 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0821 03:34:19.707890    1442 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0821 03:34:19.711023    1442 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
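The 457-byte conflist payload is not echoed either; a minimal bridge-plus-portmap conflist of the shape this step installs (field values are an assumption, except the pod subnet, which matches podSubnet above) might look like:

	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF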
	I0821 03:34:19.716873    1442 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0821 03:34:19.716924    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:19.716951    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43 minikube.k8s.io/name=addons-500000 minikube.k8s.io/updated_at=2023_08_21T03_34_19_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:19.723924    1442 ops.go:34] apiserver oom_adj: -16
	I0821 03:34:19.767999    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:19.814902    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:20.352169    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:20.852188    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:21.352164    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:21.852123    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:22.352346    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:22.852184    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:23.352159    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:23.852279    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:24.352116    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:24.852182    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:25.352203    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:25.852083    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:26.352293    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:26.852062    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:27.352046    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:27.851991    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:28.352173    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:28.851976    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:29.352173    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:29.851943    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:30.352016    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:30.851904    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:31.351923    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:31.851905    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:32.351835    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:32.388500    1442 kubeadm.go:1081] duration metric: took 12.671972458s to wait for elevateKubeSystemPrivileges.
	I0821 03:34:32.388516    1442 kubeadm.go:406] StartCluster complete in 20.01278175s
	I0821 03:34:32.388525    1442 settings.go:142] acquiring lock: {Name:mkeb461ec3a6a92ee32ce41e8df63d6759cb2728 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:34:32.388680    1442 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17102-920/kubeconfig
	I0821 03:34:32.388902    1442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/kubeconfig: {Name:mk2bc9c64ad130c36a0253707ac2ba3f8fd22371 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:34:32.389107    1442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0821 03:34:32.389147    1442 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0821 03:34:32.389221    1442 addons.go:69] Setting volumesnapshots=true in profile "addons-500000"
	I0821 03:34:32.389227    1442 addons.go:231] Setting addon volumesnapshots=true in "addons-500000"
	I0821 03:34:32.389225    1442 addons.go:69] Setting cloud-spanner=true in profile "addons-500000"
	I0821 03:34:32.389236    1442 addons.go:231] Setting addon cloud-spanner=true in "addons-500000"
	I0821 03:34:32.389251    1442 config.go:182] Loaded profile config "addons-500000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0821 03:34:32.389271    1442 host.go:66] Checking if "addons-500000" exists ...
	I0821 03:34:32.389279    1442 addons.go:69] Setting storage-provisioner=true in profile "addons-500000"
	I0821 03:34:32.389222    1442 addons.go:69] Setting gcp-auth=true in profile "addons-500000"
	I0821 03:34:32.389282    1442 addons.go:231] Setting addon storage-provisioner=true in "addons-500000"
	I0821 03:34:32.389288    1442 mustload.go:65] Loading cluster: addons-500000
	I0821 03:34:32.389299    1442 host.go:66] Checking if "addons-500000" exists ...
	I0821 03:34:32.389299    1442 addons.go:69] Setting inspektor-gadget=true in profile "addons-500000"
	I0821 03:34:32.389327    1442 addons.go:69] Setting registry=true in profile "addons-500000"
	I0821 03:34:32.389360    1442 config.go:182] Loaded profile config "addons-500000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0821 03:34:32.389358    1442 addons.go:69] Setting ingress-dns=true in profile "addons-500000"
	I0821 03:34:32.389378    1442 addons.go:231] Setting addon ingress-dns=true in "addons-500000"
	I0821 03:34:32.389273    1442 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-500000"
	I0821 03:34:32.389396    1442 addons.go:69] Setting ingress=true in profile "addons-500000"
	I0821 03:34:32.389434    1442 host.go:66] Checking if "addons-500000" exists ...
	I0821 03:34:32.389418    1442 addons.go:69] Setting metrics-server=true in profile "addons-500000"
	I0821 03:34:32.389454    1442 addons.go:231] Setting addon metrics-server=true in "addons-500000"
	I0821 03:34:32.389465    1442 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-500000"
	I0821 03:34:32.389506    1442 host.go:66] Checking if "addons-500000" exists ...
	I0821 03:34:32.389519    1442 host.go:66] Checking if "addons-500000" exists ...
	I0821 03:34:32.389271    1442 host.go:66] Checking if "addons-500000" exists ...
	W0821 03:34:32.389564    1442 host.go:54] host status for "addons-500000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/monitor: connect: connection refused
	W0821 03:34:32.389572    1442 addons.go:277] "addons-500000" is not running, setting storage-provisioner=true and skipping enablement (err=<nil>)
	I0821 03:34:32.389347    1442 addons.go:231] Setting addon inspektor-gadget=true in "addons-500000"
	I0821 03:34:32.389693    1442 host.go:66] Checking if "addons-500000" exists ...
	W0821 03:34:32.389757    1442 host.go:54] host status for "addons-500000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/monitor: connect: connection refused
	W0821 03:34:32.389767    1442 addons.go:277] "addons-500000" is not running, setting cloud-spanner=true and skipping enablement (err=<nil>)
	I0821 03:34:32.389367    1442 addons.go:231] Setting addon registry=true in "addons-500000"
	I0821 03:34:32.389786    1442 host.go:66] Checking if "addons-500000" exists ...
	W0821 03:34:32.389790    1442 host.go:54] host status for "addons-500000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/monitor: connect: connection refused
	W0821 03:34:32.389796    1442 addons.go:277] "addons-500000" is not running, setting metrics-server=true and skipping enablement (err=<nil>)
	I0821 03:34:32.389799    1442 addons.go:467] Verifying addon metrics-server=true in "addons-500000"
	W0821 03:34:32.389788    1442 host.go:54] host status for "addons-500000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/monitor: connect: connection refused
	W0821 03:34:32.389803    1442 addons.go:277] "addons-500000" is not running, setting csi-hostpath-driver=true and skipping enablement (err=<nil>)
	I0821 03:34:32.389805    1442 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-500000"
	I0821 03:34:32.389275    1442 addons.go:69] Setting default-storageclass=true in profile "addons-500000"
	I0821 03:34:32.394058    1442 out.go:177] * Verifying csi-hostpath-driver addon...
	I0821 03:34:32.389436    1442 addons.go:231] Setting addon ingress=true in "addons-500000"
	I0821 03:34:32.389868    1442 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-500000"
	W0821 03:34:32.389953    1442 host.go:54] host status for "addons-500000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/monitor: connect: connection refused
	W0821 03:34:32.390033    1442 host.go:54] host status for "addons-500000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/monitor: connect: connection refused
	W0821 03:34:32.390053    1442 host.go:54] host status for "addons-500000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/monitor: connect: connection refused
	I0821 03:34:32.390510    1442 host.go:66] Checking if "addons-500000" exists ...
	I0821 03:34:32.409190    1442 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	W0821 03:34:32.404296    1442 addons.go:277] "addons-500000" is not running, setting ingress-dns=true and skipping enablement (err=<nil>)
	W0821 03:34:32.404342    1442 addons.go:277] "addons-500000" is not running, setting inspektor-gadget=true and skipping enablement (err=<nil>)
	W0821 03:34:32.404346    1442 addons.go:277] "addons-500000" is not running, setting registry=true and skipping enablement (err=<nil>)
	I0821 03:34:32.404410    1442 host.go:66] Checking if "addons-500000" exists ...
	I0821 03:34:32.404764    1442 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0821 03:34:32.413218    1442 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0821 03:34:32.413224    1442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0821 03:34:32.413232    1442 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/id_rsa Username:docker}
	I0821 03:34:32.413266    1442 addons.go:467] Verifying addon registry=true in "addons-500000"
	I0821 03:34:32.418274    1442 out.go:177] * Verifying registry addon...
	I0821 03:34:32.419795    1442 addons.go:231] Setting addon default-storageclass=true in "addons-500000"
	I0821 03:34:32.419868    1442 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-500000" context rescaled to 1 replicas
	I0821 03:34:32.420817    1442 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0821 03:34:32.421498    1442 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0821 03:34:32.421694    1442 host.go:66] Checking if "addons-500000" exists ...
	I0821 03:34:32.421701    1442 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0821 03:34:32.421849    1442 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0821 03:34:32.431173    1442 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0821 03:34:32.440212    1442 out.go:177] * Verifying Kubernetes components...
	I0821 03:34:32.431974    1442 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0821 03:34:32.435186    1442 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=registry
	I0821 03:34:32.444202    1442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0821 03:34:32.444209    1442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0821 03:34:32.447466    1442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0821 03:34:32.448196    1442 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.8.1
	I0821 03:34:32.448211    1442 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/id_rsa Username:docker}
	I0821 03:34:32.451292    1442 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0821 03:34:32.451299    1442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16083 bytes)
	I0821 03:34:32.451306    1442 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/id_rsa Username:docker}
	I0821 03:34:32.454351    1442 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0821 03:34:32.454358    1442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0821 03:34:32.485876    1442 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0821 03:34:32.485886    1442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0821 03:34:32.513135    1442 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0821 03:34:32.513147    1442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0821 03:34:32.532036    1442 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0821 03:34:32.532052    1442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0821 03:34:32.537566    1442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0821 03:34:32.542495    1442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0821 03:34:32.548533    1442 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0821 03:34:32.548541    1442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0821 03:34:32.568087    1442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0821 03:34:33.517324    1442 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.069159875s)
	I0821 03:34:33.517338    1442 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.069147125s)
	I0821 03:34:33.517342    1442 start.go:901] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
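	(Note: the sed pipeline that just completed rewrites the CoreDNS Corefile so that host.minikube.internal resolves to the host-side IP. A minimal sketch of the stanza it injects, taken from the command above, plus a hypothetical check that assumes a working kubeconfig for this cluster:)
	# Stanza inserted ahead of the "forward . /etc/resolv.conf" line:
	#     hosts {
	#        192.168.105.1 host.minikube.internal
	#        fallthrough
	#     }
	# Hypothetical verification (any kubectl with access to the cluster):
	kubectl -n kube-system get configmap coredns -o yaml | grep -A 3 'hosts {'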
	I0821 03:34:33.517808    1442 node_ready.go:35] waiting up to 6m0s for node "addons-500000" to be "Ready" ...
	I0821 03:34:33.519592    1442 node_ready.go:49] node "addons-500000" has status "Ready":"True"
	I0821 03:34:33.519599    1442 node_ready.go:38] duration metric: took 1.779708ms waiting for node "addons-500000" to be "Ready" ...
	I0821 03:34:33.519602    1442 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0821 03:34:33.522687    1442 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-97rp7" in "kube-system" namespace to be "Ready" ...
	I0821 03:34:33.964195    1442 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.421717084s)
	I0821 03:34:33.964211    1442 addons.go:467] Verifying addon ingress=true in "addons-500000"
	I0821 03:34:33.968723    1442 out.go:177] * Verifying ingress addon...
	I0821 03:34:33.964338    1442 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.396275834s)
	W0821 03:34:33.968774    1442 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0821 03:34:33.975741    1442 retry.go:31] will retry after 231.591556ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
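	(Note: this is the classic CRD ordering failure: the VolumeSnapshotClass is submitted in the same kubectl apply batch as the CRD that defines it, so the REST mapping for snapshot.storage.k8s.io/v1 does not exist yet. minikube retries below with --force; a hedged sketch of the conventional two-phase workaround, reusing the manifest paths from this log:)
	# Sketch only: apply the CRDs first, wait for them to be established,
	# then apply the custom resources that depend on them.
	kubectl apply \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	kubectl wait --for condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml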
	I0821 03:34:33.976141    1442 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0821 03:34:33.984299    1442 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0821 03:34:33.984307    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:33.987720    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:34.207434    1442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0821 03:34:34.491123    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:34.991180    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:35.490538    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:35.534205    1442 pod_ready.go:102] pod "coredns-5d78c9869d-97rp7" in "kube-system" namespace has status "Ready":"False"
	I0821 03:34:35.990628    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:36.490998    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:36.745839    1442 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.5384555s)
	I0821 03:34:36.990793    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:37.491119    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:37.534210    1442 pod_ready.go:102] pod "coredns-5d78c9869d-97rp7" in "kube-system" namespace has status "Ready":"False"
	I0821 03:34:37.990643    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:38.490772    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:38.997287    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:39.008172    1442 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0821 03:34:39.008186    1442 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/id_rsa Username:docker}
	I0821 03:34:39.055480    1442 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0821 03:34:39.064828    1442 addons.go:231] Setting addon gcp-auth=true in "addons-500000"
	I0821 03:34:39.064858    1442 host.go:66] Checking if "addons-500000" exists ...
	I0821 03:34:39.065649    1442 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0821 03:34:39.065660    1442 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/id_rsa Username:docker}
	I0821 03:34:39.100776    1442 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0821 03:34:39.103705    1442 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0821 03:34:39.107726    1442 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0821 03:34:39.107734    1442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0821 03:34:39.113078    1442 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0821 03:34:39.113087    1442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0821 03:34:39.127541    1442 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0821 03:34:39.127551    1442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0821 03:34:39.133486    1442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0821 03:34:39.491109    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:39.534694    1442 pod_ready.go:102] pod "coredns-5d78c9869d-97rp7" in "kube-system" namespace has status "Ready":"False"
	I0821 03:34:39.629710    1442 addons.go:467] Verifying addon gcp-auth=true in "addons-500000"
	I0821 03:34:39.641410    1442 out.go:177] * Verifying gcp-auth addon...
	I0821 03:34:39.650441    1442 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0821 03:34:39.656554    1442 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0821 03:34:39.656563    1442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 03:34:39.658191    1442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 03:34:39.991177    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:40.161154    1442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 03:34:40.492443    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:40.660810    1442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 03:34:40.990558    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:41.161357    1442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 03:34:41.492269    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:41.534695    1442 pod_ready.go:102] pod "coredns-5d78c9869d-97rp7" in "kube-system" namespace has status "Ready":"False"
	I0821 03:34:41.660947    1442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 03:34:41.990678    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:42.161013    1442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 03:34:42.490658    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:42.660884    1442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 03:34:42.990530    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:43.161042    1442 kapi.go:107] duration metric: took 3.510698166s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0821 03:34:43.165184    1442 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-500000 cluster.
	I0821 03:34:43.169238    1442 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0821 03:34:43.173158    1442 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
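	(Note: the gcp-auth hint above can be exercised directly; a hypothetical example, where the pod name and image are placeholders and only the gcp-auth-skip-secret label key comes from the message:)
	kubectl run no-creds-demo --image=nginx --labels="gcp-auth-skip-secret=true"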
	I0821 03:34:43.491145    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:43.534713    1442 pod_ready.go:97] pod "coredns-5d78c9869d-97rp7" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-21 03:34:32 -0700 PDT Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-21 03:34:32 -0700 PDT Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-21 03:34:32 -0700 PDT Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-21 03:34:32 -0700 PDT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.105.2 HostIPs:[] PodIP: PodIPs:[] StartTime:2023-08-21 03:34:32 -0700 PDT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-08-21 03:34:33 -0700 PDT,FinishedAt:2023-08-21 03:34:43 -0700 PDT,ContainerID:docker://d9032391cb53f0fa8cfd4e1696eef2d7eb7096ba08423fd5087bb7b4d2fba5ed,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:docker://d9032391cb53f0fa8cfd4e1696eef2d7eb7096ba08423fd5087bb7b4d2fba5ed Started:0x140018d39a0 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0821 03:34:43.534727    1442 pod_ready.go:81] duration metric: took 10.012309458s waiting for pod "coredns-5d78c9869d-97rp7" in "kube-system" namespace to be "Ready" ...
	E0821 03:34:43.534732    1442 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-5d78c9869d-97rp7" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-21 03:34:32 -0700 PDT Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-21 03:34:32 -0700 PDT Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-21 03:34:32 -0700 PDT Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-21 03:34:32 -0700 PDT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.105.2 HostIPs:[] PodIP: PodIPs:[] StartTime:2023-08-21 03:34:32 -0700 PDT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-08-21 03:34:33 -0700 PDT,FinishedAt:2023-08-21 03:34:43 -0700 PDT,ContainerID:docker://d9032391cb53f0fa8cfd4e1696eef2d7eb7096ba08423fd5087bb7b4d2fba5ed,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:docker://d9032391cb53f0fa8cfd4e1696eef2d7eb7096ba08423fd5087bb7b4d2fba5ed Started:0x140018d39a0 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0821 03:34:43.534736    1442 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-hbg44" in "kube-system" namespace to be "Ready" ...
	I0821 03:34:43.537136    1442 pod_ready.go:92] pod "coredns-5d78c9869d-hbg44" in "kube-system" namespace has status "Ready":"True"
	I0821 03:34:43.537140    1442 pod_ready.go:81] duration metric: took 2.400375ms waiting for pod "coredns-5d78c9869d-hbg44" in "kube-system" namespace to be "Ready" ...
	I0821 03:34:43.537145    1442 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-500000" in "kube-system" namespace to be "Ready" ...
	I0821 03:34:43.539758    1442 pod_ready.go:92] pod "etcd-addons-500000" in "kube-system" namespace has status "Ready":"True"
	I0821 03:34:43.539762    1442 pod_ready.go:81] duration metric: took 2.614916ms waiting for pod "etcd-addons-500000" in "kube-system" namespace to be "Ready" ...
	I0821 03:34:43.539766    1442 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-500000" in "kube-system" namespace to be "Ready" ...
	I0821 03:34:43.542039    1442 pod_ready.go:92] pod "kube-apiserver-addons-500000" in "kube-system" namespace has status "Ready":"True"
	I0821 03:34:43.542045    1442 pod_ready.go:81] duration metric: took 2.276584ms waiting for pod "kube-apiserver-addons-500000" in "kube-system" namespace to be "Ready" ...
	I0821 03:34:43.542049    1442 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-500000" in "kube-system" namespace to be "Ready" ...
	I0821 03:34:43.544341    1442 pod_ready.go:92] pod "kube-controller-manager-addons-500000" in "kube-system" namespace has status "Ready":"True"
	I0821 03:34:43.544345    1442 pod_ready.go:81] duration metric: took 2.2935ms waiting for pod "kube-controller-manager-addons-500000" in "kube-system" namespace to be "Ready" ...
	I0821 03:34:43.544348    1442 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-z2wj9" in "kube-system" namespace to be "Ready" ...
	I0821 03:34:43.933736    1442 pod_ready.go:92] pod "kube-proxy-z2wj9" in "kube-system" namespace has status "Ready":"True"
	I0821 03:34:43.933748    1442 pod_ready.go:81] duration metric: took 389.407375ms waiting for pod "kube-proxy-z2wj9" in "kube-system" namespace to be "Ready" ...
	I0821 03:34:43.933752    1442 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-500000" in "kube-system" namespace to be "Ready" ...
	I0821 03:34:43.990470    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:44.334535    1442 pod_ready.go:92] pod "kube-scheduler-addons-500000" in "kube-system" namespace has status "Ready":"True"
	I0821 03:34:44.334545    1442 pod_ready.go:81] duration metric: took 400.801125ms waiting for pod "kube-scheduler-addons-500000" in "kube-system" namespace to be "Ready" ...
	I0821 03:34:44.334549    1442 pod_ready.go:38] duration metric: took 10.81524225s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0821 03:34:44.334558    1442 api_server.go:52] waiting for apiserver process to appear ...
	I0821 03:34:44.334639    1442 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0821 03:34:44.339980    1442 api_server.go:72] duration metric: took 11.909098333s to wait for apiserver process to appear ...
	I0821 03:34:44.339987    1442 api_server.go:88] waiting for apiserver healthz status ...
	I0821 03:34:44.339993    1442 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I0821 03:34:44.344178    1442 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
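	(Note: the healthz probe above can be reproduced by hand; a sketch that assumes the default RBAC, which grants unauthenticated clients read access to /healthz:)
	# -k skips verification of the cluster's self-signed certificate.
	curl -k https://192.168.105.2:8443/healthz
	# expected body on success: ok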
	I0821 03:34:44.344920    1442 api_server.go:141] control plane version: v1.27.4
	I0821 03:34:44.344925    1442 api_server.go:131] duration metric: took 4.936ms to wait for apiserver health ...
	I0821 03:34:44.344929    1442 system_pods.go:43] waiting for kube-system pods to appear ...
	I0821 03:34:44.490452    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:44.535983    1442 system_pods.go:59] 8 kube-system pods found
	I0821 03:34:44.535991    1442 system_pods.go:61] "coredns-5d78c9869d-hbg44" [2212048e-385c-4235-ad14-1b9e4e812106] Running
	I0821 03:34:44.535994    1442 system_pods.go:61] "etcd-addons-500000" [dcde2eed-b2a3-4b2d-af51-14d42189714c] Running
	I0821 03:34:44.536011    1442 system_pods.go:61] "kube-apiserver-addons-500000" [a4c38aeb-a7ef-4239-ac34-2437f9c67d96] Running
	I0821 03:34:44.536015    1442 system_pods.go:61] "kube-controller-manager-addons-500000" [972b1e42-cd56-4f77-ad52-a1df2b79fdae] Running
	I0821 03:34:44.536018    1442 system_pods.go:61] "kube-proxy-z2wj9" [56cdd0e9-2b8f-476e-be08-a52381eecb16] Running
	I0821 03:34:44.536020    1442 system_pods.go:61] "kube-scheduler-addons-500000" [c2d2f1e5-45c6-48a9-990d-7e32d9d75976] Running
	I0821 03:34:44.536022    1442 system_pods.go:61] "snapshot-controller-75bbb956b9-4pgqh" [7452ce04-2fbb-4f7a-9e5f-87b8b577fc94] Running
	I0821 03:34:44.536025    1442 system_pods.go:61] "snapshot-controller-75bbb956b9-j9mkf" [dbd2a297-29a5-4435-8fb1-849d8ae91771] Running
	I0821 03:34:44.536028    1442 system_pods.go:74] duration metric: took 191.1015ms to wait for pod list to return data ...
	I0821 03:34:44.536033    1442 default_sa.go:34] waiting for default service account to be created ...
	I0821 03:34:44.734042    1442 default_sa.go:45] found service account: "default"
	I0821 03:34:44.734051    1442 default_sa.go:55] duration metric: took 198.020583ms for default service account to be created ...
	I0821 03:34:44.734055    1442 system_pods.go:116] waiting for k8s-apps to be running ...
	I0821 03:34:44.935348    1442 system_pods.go:86] 8 kube-system pods found
	I0821 03:34:44.935359    1442 system_pods.go:89] "coredns-5d78c9869d-hbg44" [2212048e-385c-4235-ad14-1b9e4e812106] Running
	I0821 03:34:44.935362    1442 system_pods.go:89] "etcd-addons-500000" [dcde2eed-b2a3-4b2d-af51-14d42189714c] Running
	I0821 03:34:44.935365    1442 system_pods.go:89] "kube-apiserver-addons-500000" [a4c38aeb-a7ef-4239-ac34-2437f9c67d96] Running
	I0821 03:34:44.935367    1442 system_pods.go:89] "kube-controller-manager-addons-500000" [972b1e42-cd56-4f77-ad52-a1df2b79fdae] Running
	I0821 03:34:44.935369    1442 system_pods.go:89] "kube-proxy-z2wj9" [56cdd0e9-2b8f-476e-be08-a52381eecb16] Running
	I0821 03:34:44.935372    1442 system_pods.go:89] "kube-scheduler-addons-500000" [c2d2f1e5-45c6-48a9-990d-7e32d9d75976] Running
	I0821 03:34:44.935374    1442 system_pods.go:89] "snapshot-controller-75bbb956b9-4pgqh" [7452ce04-2fbb-4f7a-9e5f-87b8b577fc94] Running
	I0821 03:34:44.935376    1442 system_pods.go:89] "snapshot-controller-75bbb956b9-j9mkf" [dbd2a297-29a5-4435-8fb1-849d8ae91771] Running
	I0821 03:34:44.935380    1442 system_pods.go:126] duration metric: took 201.327917ms to wait for k8s-apps to be running ...
	I0821 03:34:44.935391    1442 system_svc.go:44] waiting for kubelet service to be running ....
	I0821 03:34:44.935475    1442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0821 03:34:44.941643    1442 system_svc.go:56] duration metric: took 6.252209ms WaitForService to wait for kubelet.
	I0821 03:34:44.941651    1442 kubeadm.go:581] duration metric: took 12.5107865s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0821 03:34:44.941660    1442 node_conditions.go:102] verifying NodePressure condition ...
	I0821 03:34:44.990746    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:45.134674    1442 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0821 03:34:45.134706    1442 node_conditions.go:123] node cpu capacity is 2
	I0821 03:34:45.134712    1442 node_conditions.go:105] duration metric: took 193.055083ms to run NodePressure ...
	I0821 03:34:45.134717    1442 start.go:228] waiting for startup goroutines ...
	I0821 03:34:45.490470    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:45.990643    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:46.490327    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:46.990587    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:47.490536    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:47.990358    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:48.490279    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:48.990490    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:49.490328    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:49.990414    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:50.490337    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:50.990260    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:51.490639    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:51.989843    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:52.490813    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:52.990112    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:53.491005    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:53.992627    1442 kapi.go:107] duration metric: took 20.017033875s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0821 03:40:32.405313    1442 kapi.go:107] duration metric: took 6m0.010490834s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	W0821 03:40:32.405643    1442 out.go:239] ! Enabling 'csi-hostpath-driver' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=csi-hostpath-driver pods: context deadline exceeded]
	I0821 03:40:32.421828    1442 kapi.go:107] duration metric: took 6m0.009978583s to wait for kubernetes.io/minikube-addons=registry ...
	W0821 03:40:32.421921    1442 out.go:239] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
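	(Note: both six-minute waits above key off pod label selectors; a hedged way to inspect what, if anything, matched them, assuming kubectl access to the same cluster:)
	kubectl -n kube-system get pods -l kubernetes.io/minikube-addons=csi-hostpath-driver
	kubectl -n kube-system get pods -l kubernetes.io/minikube-addons=registry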
	I0821 03:40:32.430174    1442 out.go:177] * Enabled addons: storage-provisioner, cloud-spanner, metrics-server, ingress-dns, inspektor-gadget, default-storageclass, volumesnapshots, gcp-auth, ingress
	I0821 03:40:32.437176    1442 addons.go:502] enable addons completed in 6m0.058033333s: enabled=[storage-provisioner cloud-spanner metrics-server ingress-dns inspektor-gadget default-storageclass volumesnapshots gcp-auth ingress]
	I0821 03:40:32.437214    1442 start.go:233] waiting for cluster config update ...
	I0821 03:40:32.437252    1442 start.go:242] writing updated cluster config ...
	I0821 03:40:32.438394    1442 ssh_runner.go:195] Run: rm -f paused
	I0821 03:40:32.505190    1442 start.go:600] kubectl: 1.27.2, cluster: 1.27.4 (minor skew: 0)
	I0821 03:40:32.509248    1442 out.go:177] * Done! kubectl is now configured to use "addons-500000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Mon 2023-08-21 10:34:00 UTC, ends at Mon 2023-08-21 11:04:33 UTC. --
	Aug 21 11:02:39 addons-500000 dockerd[1153]: time="2023-08-21T11:02:39.273857692Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 21 11:02:55 addons-500000 dockerd[1153]: time="2023-08-21T11:02:55.543938415Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 21 11:02:55 addons-500000 dockerd[1153]: time="2023-08-21T11:02:55.543990748Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 21 11:02:55 addons-500000 dockerd[1153]: time="2023-08-21T11:02:55.544013498Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 21 11:02:55 addons-500000 dockerd[1153]: time="2023-08-21T11:02:55.544021373Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 21 11:02:55 addons-500000 dockerd[1148]: time="2023-08-21T11:02:55.578525191Z" level=info msg="ignoring event" container=c0a0c21e7fc373fff20c0a42b48ce36406dced0b381d15ac7b0f6ca174b5c710 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 21 11:02:55 addons-500000 dockerd[1153]: time="2023-08-21T11:02:55.578775982Z" level=info msg="shim disconnected" id=c0a0c21e7fc373fff20c0a42b48ce36406dced0b381d15ac7b0f6ca174b5c710 namespace=moby
	Aug 21 11:02:55 addons-500000 dockerd[1153]: time="2023-08-21T11:02:55.578806607Z" level=warning msg="cleaning up after shim disconnected" id=c0a0c21e7fc373fff20c0a42b48ce36406dced0b381d15ac7b0f6ca174b5c710 namespace=moby
	Aug 21 11:02:55 addons-500000 dockerd[1153]: time="2023-08-21T11:02:55.578811107Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 21 11:03:23 addons-500000 dockerd[1153]: time="2023-08-21T11:03:23.509437266Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 21 11:03:23 addons-500000 dockerd[1153]: time="2023-08-21T11:03:23.509551016Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 21 11:03:23 addons-500000 dockerd[1153]: time="2023-08-21T11:03:23.509568308Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 21 11:03:23 addons-500000 dockerd[1153]: time="2023-08-21T11:03:23.509597975Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 21 11:03:23 addons-500000 dockerd[1153]: time="2023-08-21T11:03:23.558690691Z" level=info msg="shim disconnected" id=3be05fbf0cc31cea31bd6608e73f739322366f47f5140bf40cb7b7b636df753b namespace=moby
	Aug 21 11:03:23 addons-500000 dockerd[1153]: time="2023-08-21T11:03:23.559043400Z" level=warning msg="cleaning up after shim disconnected" id=3be05fbf0cc31cea31bd6608e73f739322366f47f5140bf40cb7b7b636df753b namespace=moby
	Aug 21 11:03:23 addons-500000 dockerd[1153]: time="2023-08-21T11:03:23.559053941Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 21 11:03:23 addons-500000 dockerd[1148]: time="2023-08-21T11:03:23.559264358Z" level=info msg="ignoring event" container=3be05fbf0cc31cea31bd6608e73f739322366f47f5140bf40cb7b7b636df753b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 21 11:04:15 addons-500000 dockerd[1153]: time="2023-08-21T11:04:15.513634674Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 21 11:04:15 addons-500000 dockerd[1153]: time="2023-08-21T11:04:15.513723757Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 21 11:04:15 addons-500000 dockerd[1153]: time="2023-08-21T11:04:15.513736257Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 21 11:04:15 addons-500000 dockerd[1153]: time="2023-08-21T11:04:15.513744257Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 21 11:04:15 addons-500000 dockerd[1148]: time="2023-08-21T11:04:15.557393445Z" level=info msg="ignoring event" container=61cb73773eecc3faafe56084535ad2d59c6b1097346767deab59c844d247f185 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 21 11:04:15 addons-500000 dockerd[1153]: time="2023-08-21T11:04:15.557558654Z" level=info msg="shim disconnected" id=61cb73773eecc3faafe56084535ad2d59c6b1097346767deab59c844d247f185 namespace=moby
	Aug 21 11:04:15 addons-500000 dockerd[1153]: time="2023-08-21T11:04:15.557585820Z" level=warning msg="cleaning up after shim disconnected" id=61cb73773eecc3faafe56084535ad2d59c6b1097346767deab59c844d247f185 namespace=moby
	Aug 21 11:04:15 addons-500000 dockerd[1153]: time="2023-08-21T11:04:15.557590195Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                         ATTEMPT             POD ID
	61cb73773eecc       13753a81eccfd                                                                                                                18 seconds ago      Exited              hello-world-app              4                   a244270f71415
	12742b2537ff1       nginx@sha256:cac882be2b7305e0c8d3e3cd0575a2fd58f5fde6dd5d6299605aa0f3e67ca385                                                2 minutes ago       Running             nginx                        0                   ca7496b30bdd4
	734d7d69c9e8b       registry.k8s.io/ingress-nginx/controller@sha256:e5c4824e7375fcf2a393e1c03c293b69759af37a9ca6abdb91b13d78a93da8bd             29 minutes ago      Running             controller                   0                   bbb4a4c960656
	dbe5746b118a6       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf                 29 minutes ago      Running             gcp-auth                     0                   31154fc41fc35
	fc5767357c5d9       8f2588812ab29                                                                                                                29 minutes ago      Exited              patch                        1                   0538e79b5c883
	aa7d89a7d68d0       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:543c40fd093964bc9ab509d3e791f9989963021f1e9e4c9c7b6700b02bfb227b   29 minutes ago      Exited              create                       0                   3c078f4b9885e
	7979593c9bb52       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280      29 minutes ago      Running             volume-snapshot-controller   0                   70a68685a69fb
	fe9609fabef21       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280      29 minutes ago      Running             volume-snapshot-controller   0                   39eda7944d576
	16cfb4c805080       97e04611ad434                                                                                                                30 minutes ago      Running             coredns                      0                   b6fa8f87ea743
	36558206e7ebf       532e5a30e948f                                                                                                                30 minutes ago      Running             kube-proxy                   0                   ccc8633d52ca6
	bd48baf71b163       6eb63895cb67f                                                                                                                30 minutes ago      Running             kube-scheduler               0                   65c9ea48d27ae
	27dc2c0d7a4a5       24bc64e911039                                                                                                                30 minutes ago      Running             etcd                         0                   0f2cdc52bbda6
	dc949a6ce14c1       64aece92d6bde                                                                                                                30 minutes ago      Running             kube-apiserver               0                   090daa0e10080
	41982c5e9fc8f       389f6f052cf83                                                                                                                30 minutes ago      Running             kube-controller-manager      0                   a9c3d15b86bf8
	
	* 
	* ==> controller_ingress [734d7d69c9e8] <==
	* 10.244.0.1 - - [21/Aug/2023:11:02:36 +0000] "GET / HTTP/1.1" 200 615 "-" "curl/7.79.1" 81 0.001 [default-nginx-80] [] 10.244.0.12:80 615 0.001 200 95299bb71fe9816a7d82e8b3e20749a8
	I0821 11:02:26.741615       6 controller.go:190] "Configuration changes detected, backend reload required"
	I0821 11:02:26.743056       6 event.go:285] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"a0653b00-a1ff-4e5c-9176-ce66cb7d62ef", APIVersion:"networking.k8s.io/v1", ResourceVersion:"1725", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	I0821 11:02:26.771277       6 controller.go:207] "Backend successfully reloaded"
	I0821 11:02:26.771531       6 event.go:285] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7799c6795f-4ppd9", UID:"c950764c-9601-4c76-adb3-ddb61bd6335d", APIVersion:"v1", ResourceVersion:"458", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W0821 11:02:30.075242       6 controller.go:1207] Service "default/nginx" does not have any active Endpoint.
	I0821 11:02:30.075305       6 controller.go:190] "Configuration changes detected, backend reload required"
	I0821 11:02:30.115310       6 controller.go:207] "Backend successfully reloaded"
	I0821 11:02:30.115514       6 event.go:285] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7799c6795f-4ppd9", UID:"c950764c-9601-4c76-adb3-ddb61bd6335d", APIVersion:"v1", ResourceVersion:"458", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W0821 11:02:37.094182       6 controller.go:1100] Error obtaining Endpoints for Service "kube-system/hello-world-app": no object matching key "kube-system/hello-world-app" in local store
	I0821 11:02:37.108823       6 admission.go:149] processed ingress via admission controller {testedIngressLength:2 testedIngressTime:0.014s renderingIngressLength:2 renderingIngressTime:0.007s admissionTime:25.8kBs testedConfigurationSize:0.021}
	I0821 11:02:37.108844       6 main.go:110] "successfully validated configuration, accepting" ingress="kube-system/example-ingress"
	I0821 11:02:37.112759       6 store.go:432] "Found valid IngressClass" ingress="kube-system/example-ingress" ingressclass="nginx"
	W0821 11:02:37.113082       6 controller.go:1100] Error obtaining Endpoints for Service "kube-system/hello-world-app": no object matching key "kube-system/hello-world-app" in local store
	I0821 11:02:37.113161       6 controller.go:190] "Configuration changes detected, backend reload required"
	I0821 11:02:37.115004       6 event.go:285] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"kube-system", Name:"example-ingress", UID:"1e796d1f-621b-4265-9e26-795ee454cc5a", APIVersion:"networking.k8s.io/v1", ResourceVersion:"1763", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	I0821 11:02:37.149466       6 controller.go:207] "Backend successfully reloaded"
	I0821 11:02:37.149953       6 event.go:285] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7799c6795f-4ppd9", UID:"c950764c-9601-4c76-adb3-ddb61bd6335d", APIVersion:"v1", ResourceVersion:"458", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	I0821 11:02:40.450249       6 controller.go:190] "Configuration changes detected, backend reload required"
	I0821 11:02:40.511656       6 controller.go:207] "Backend successfully reloaded"
	I0821 11:02:40.512270       6 event.go:285] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7799c6795f-4ppd9", UID:"c950764c-9601-4c76-adb3-ddb61bd6335d", APIVersion:"v1", ResourceVersion:"458", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	I0821 11:02:54.628076       6 status.go:300] "updating Ingress status" namespace="default" ingress="nginx-ingress" currentValue=null newValue=[{"ip":"192.168.105.2"}]
	I0821 11:02:54.628097       6 status.go:300] "updating Ingress status" namespace="kube-system" ingress="example-ingress" currentValue=null newValue=[{"ip":"192.168.105.2"}]
	I0821 11:02:54.636536       6 event.go:285] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"kube-system", Name:"example-ingress", UID:"1e796d1f-621b-4265-9e26-795ee454cc5a", APIVersion:"networking.k8s.io/v1", ResourceVersion:"1798", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	I0821 11:02:54.636758       6 event.go:285] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"a0653b00-a1ff-4e5c-9176-ce66cb7d62ef", APIVersion:"networking.k8s.io/v1", ResourceVersion:"1799", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	
	* 
	* ==> coredns [16cfb4c80508] <==
	* [INFO] 10.244.0.11:55380 - 15444 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000192417s
	[INFO] 10.244.0.11:55595 - 33986 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000080917s
	[INFO] 10.244.0.11:55380 - 36243 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000177876s
	[INFO] 10.244.0.11:55380 - 42834 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000146333s
	[INFO] 10.244.0.11:55595 - 5784 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00011875s
	[INFO] 10.244.0.11:55595 - 56910 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000050292s
	[INFO] 10.244.0.11:55380 - 35306 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000218333s
	[INFO] 10.244.0.11:55595 - 64077 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000055958s
	[INFO] 10.244.0.11:55595 - 56884 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000076625s
	[INFO] 10.244.0.11:55595 - 56007 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000070583s
	[INFO] 10.244.0.11:55595 - 54545 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000067333s
	[INFO] 10.244.0.11:51497 - 59355 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000398834s
	[INFO] 10.244.0.11:51497 - 38991 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000209708s
	[INFO] 10.244.0.11:51497 - 6555 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000191958s
	[INFO] 10.244.0.11:51497 - 63288 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000409876s
	[INFO] 10.244.0.11:51497 - 49529 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00012975s
	[INFO] 10.244.0.11:51497 - 3686 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000123626s
	[INFO] 10.244.0.11:51497 - 19423 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000240209s
	[INFO] 10.244.0.11:59481 - 42442 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000222709s
	[INFO] 10.244.0.11:59481 - 36904 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.0001005s
	[INFO] 10.244.0.11:59481 - 14729 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000057417s
	[INFO] 10.244.0.11:59481 - 55234 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000074708s
	[INFO] 10.244.0.11:59481 - 58225 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000045917s
	[INFO] 10.244.0.11:59481 - 23418 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00004575s
	[INFO] 10.244.0.11:59481 - 13624 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000090417s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-500000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-500000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43
	                    minikube.k8s.io/name=addons-500000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_21T03_34_19_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 21 Aug 2023 10:34:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-500000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 21 Aug 2023 11:04:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 21 Aug 2023 11:02:58 +0000   Mon, 21 Aug 2023 10:34:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 21 Aug 2023 11:02:58 +0000   Mon, 21 Aug 2023 10:34:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 21 Aug 2023 11:02:58 +0000   Mon, 21 Aug 2023 10:34:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 21 Aug 2023 11:02:58 +0000   Mon, 21 Aug 2023 10:34:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-500000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 0e4a1f71467c44c8a10eca186773afe2
	  System UUID:                0e4a1f71467c44c8a10eca186773afe2
	  Boot ID:                    6d5e7ffc-fb7d-41fe-b076-69fd8535d300
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.4
	  Kubelet Version:            v1.27.4
	  Kube-Proxy Version:         v1.27.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-65bdb79f98-l7sq4             0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m8s
	  gcp-auth                    gcp-auth-58478865f7-zcg47                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  ingress-nginx               ingress-nginx-controller-7799c6795f-4ppd9    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         30m
	  kube-system                 coredns-5d78c9869d-hbg44                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     30m
	  kube-system                 etcd-addons-500000                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         30m
	  kube-system                 kube-apiserver-addons-500000                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         30m
	  kube-system                 kube-controller-manager-addons-500000        200m (10%)    0 (0%)      0 (0%)           0 (0%)         30m
	  kube-system                 kube-proxy-z2wj9                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         30m
	  kube-system                 kube-scheduler-addons-500000                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         30m
	  kube-system                 snapshot-controller-75bbb956b9-4pgqh         0 (0%)        0 (0%)      0 (0%)           0 (0%)         30m
	  kube-system                 snapshot-controller-75bbb956b9-j9mkf         0 (0%)        0 (0%)      0 (0%)           0 (0%)         30m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 30m   kube-proxy       
	  Normal  Starting                 30m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  30m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  30m   kubelet          Node addons-500000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30m   kubelet          Node addons-500000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30m   kubelet          Node addons-500000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                30m   kubelet          Node addons-500000 status is now: NodeReady
	  Normal  RegisteredNode           30m   node-controller  Node addons-500000 event: Registered Node addons-500000 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000000] KASLR disabled due to lack of seed
	[  +0.638012] EINJ: EINJ table not found.
	[  +0.490829] systemd-fstab-generator[117]: Ignoring "noauto" for root device
	[  +0.044680] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000871] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[Aug21 10:34] systemd-fstab-generator[479]: Ignoring "noauto" for root device
	[  +0.063431] systemd-fstab-generator[490]: Ignoring "noauto" for root device
	[  +0.413293] systemd-fstab-generator[750]: Ignoring "noauto" for root device
	[  +0.194883] systemd-fstab-generator[786]: Ignoring "noauto" for root device
	[  +0.079334] systemd-fstab-generator[797]: Ignoring "noauto" for root device
	[  +0.075319] systemd-fstab-generator[810]: Ignoring "noauto" for root device
	[  +1.241580] systemd-fstab-generator[968]: Ignoring "noauto" for root device
	[  +0.080868] systemd-fstab-generator[979]: Ignoring "noauto" for root device
	[  +0.070572] systemd-fstab-generator[990]: Ignoring "noauto" for root device
	[  +0.067357] systemd-fstab-generator[1001]: Ignoring "noauto" for root device
	[  +0.069942] systemd-fstab-generator[1042]: Ignoring "noauto" for root device
	[  +2.503453] systemd-fstab-generator[1141]: Ignoring "noauto" for root device
	[  +2.381640] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.661766] systemd-fstab-generator[1457]: Ignoring "noauto" for root device
	[  +5.156537] systemd-fstab-generator[2350]: Ignoring "noauto" for root device
	[ +13.738428] kauditd_printk_skb: 41 callbacks suppressed
	[  +1.700338] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[  +4.800757] kauditd_printk_skb: 48 callbacks suppressed
	[ +14.143799] kauditd_printk_skb: 54 callbacks suppressed
	[Aug21 11:02] kauditd_printk_skb: 1 callbacks suppressed
	
	* 
	* ==> etcd [27dc2c0d7a4a] <==
	* {"level":"info","ts":"2023-08-21T10:34:15.991Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-21T10:34:15.991Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-21T10:34:15.991Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-08-21T10:34:15.992Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-08-21T10:34:16.003Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-08-21T10:34:15.992Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-21T10:34:16.003Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.105.2:2379"}
	{"level":"info","ts":"2023-08-21T10:34:15.992Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-21T10:34:16.003Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-21T10:34:16.003Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-21T10:44:16.025Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":841}
	{"level":"info","ts":"2023-08-21T10:44:16.028Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":841,"took":"2.672822ms","hash":3376273956}
	{"level":"info","ts":"2023-08-21T10:44:16.028Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3376273956,"revision":841,"compact-revision":-1}
	{"level":"info","ts":"2023-08-21T10:49:16.035Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1031}
	{"level":"info","ts":"2023-08-21T10:49:16.038Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1031,"took":"1.375633ms","hash":1895539758}
	{"level":"info","ts":"2023-08-21T10:49:16.038Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1895539758,"revision":1031,"compact-revision":841}
	{"level":"info","ts":"2023-08-21T10:54:16.045Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1222}
	{"level":"info","ts":"2023-08-21T10:54:16.047Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1222,"took":"1.459351ms","hash":3279763987}
	{"level":"info","ts":"2023-08-21T10:54:16.047Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3279763987,"revision":1222,"compact-revision":1031}
	{"level":"info","ts":"2023-08-21T10:59:16.058Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1413}
	{"level":"info","ts":"2023-08-21T10:59:16.061Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1413,"took":"1.488371ms","hash":1268235317}
	{"level":"info","ts":"2023-08-21T10:59:16.061Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1268235317,"revision":1413,"compact-revision":1222}
	{"level":"info","ts":"2023-08-21T11:04:16.067Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1603}
	{"level":"info","ts":"2023-08-21T11:04:16.069Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1603,"took":"1.243127ms","hash":1670643557}
	{"level":"info","ts":"2023-08-21T11:04:16.070Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1670643557,"revision":1603,"compact-revision":1413}
	
	* 
	* ==> gcp-auth [dbe5746b118a] <==
	* 2023/08/21 10:34:42 GCP Auth Webhook started!
	2023/08/21 11:02:26 Ready to marshal response ...
	2023/08/21 11:02:26 Ready to write response ...
	2023/08/21 11:02:37 Ready to marshal response ...
	2023/08/21 11:02:37 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  11:04:34 up 30 min,  0 users,  load average: 0.37, 0.44, 0.35
	Linux addons-500000 5.10.57 #1 SMP PREEMPT Fri Jul 14 22:49:12 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [dc949a6ce14c] <==
	* I0821 10:49:16.759393       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0821 10:49:16.759510       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0821 10:49:16.766063       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0821 10:49:16.766169       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0821 10:54:16.749624       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0821 10:54:16.750123       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0821 10:54:16.755478       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0821 10:54:16.755644       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0821 10:54:16.765351       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0821 10:54:16.765428       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0821 10:59:16.750519       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0821 10:59:16.751153       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0821 10:59:16.751904       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0821 10:59:16.752113       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0821 10:59:16.761892       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0821 10:59:16.761965       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0821 11:02:26.738684       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0821 11:02:26.869600       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs=map[IPv4:10.111.106.162]
	I0821 11:02:37.171860       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs=map[IPv4:10.102.172.159]
	I0821 11:04:16.751175       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0821 11:04:16.751671       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0821 11:04:16.751839       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0821 11:04:16.751936       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0821 11:04:16.752119       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0821 11:04:16.752232       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [41982c5e9fc8] <==
	* I0821 10:34:42.737082       1 event.go:307] "Event occurred" object="ingress-nginx/ingress-nginx-admission-patch" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0821 10:34:42.747456       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0821 10:34:42.752783       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0821 10:34:42.756485       1 job_controller.go:523] enqueueing job ingress-nginx/ingress-nginx-admission-patch
	I0821 10:34:42.854473       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0821 10:34:42.856753       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0821 10:34:42.858553       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0821 10:34:42.858609       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0821 10:34:42.859646       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0821 10:34:42.893612       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0821 10:34:42.895861       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0821 10:34:42.897862       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-create" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0821 10:34:42.897954       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0821 10:34:42.899189       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0821 10:35:01.688712       1 resource_quota_monitor.go:223] "QuotaMonitor created object count evaluator" resource="volumesnapshots.snapshot.storage.k8s.io"
	I0821 10:35:01.688853       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0821 10:35:01.789717       1 shared_informer.go:318] Caches are synced for resource quota
	I0821 10:35:02.109377       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0821 10:35:02.210585       1 shared_informer.go:318] Caches are synced for garbage collector
	I0821 10:35:12.010356       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0821 10:35:12.011197       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0821 10:35:12.022044       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0821 10:35:12.024702       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0821 11:02:37.084707       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-65bdb79f98 to 1"
	I0821 11:02:37.090750       1 event.go:307] "Event occurred" object="default/hello-world-app-65bdb79f98" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-65bdb79f98-l7sq4"
	
	* 
	* ==> kube-proxy [36558206e7eb] <==
	* I0821 10:34:32.961845       1 node.go:141] Successfully retrieved node IP: 192.168.105.2
	I0821 10:34:32.961903       1 server_others.go:110] "Detected node IP" address="192.168.105.2"
	I0821 10:34:32.961922       1 server_others.go:554] "Using iptables proxy"
	I0821 10:34:32.984111       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0821 10:34:32.984124       1 server_others.go:192] "Using iptables Proxier"
	I0821 10:34:32.984147       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0821 10:34:32.984347       1 server.go:658] "Version info" version="v1.27.4"
	I0821 10:34:32.984357       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0821 10:34:32.984958       1 config.go:315] "Starting node config controller"
	I0821 10:34:32.984965       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0821 10:34:32.985291       1 config.go:188] "Starting service config controller"
	I0821 10:34:32.985295       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0821 10:34:32.985301       1 config.go:97] "Starting endpoint slice config controller"
	I0821 10:34:32.985318       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0821 10:34:33.085576       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0821 10:34:33.085604       1 shared_informer.go:318] Caches are synced for node config
	I0821 10:34:33.085608       1 shared_informer.go:318] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [bd48baf71b16] <==
	* W0821 10:34:16.768490       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0821 10:34:16.768493       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0821 10:34:16.768508       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0821 10:34:16.768511       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0821 10:34:16.768562       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0821 10:34:16.768566       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0821 10:34:17.606010       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0821 10:34:17.606029       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0821 10:34:17.645166       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0821 10:34:17.645193       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0821 10:34:17.674598       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0821 10:34:17.674623       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0821 10:34:17.707767       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0821 10:34:17.707781       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0821 10:34:17.724040       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0821 10:34:17.724057       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0821 10:34:17.728085       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0821 10:34:17.728146       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0821 10:34:17.756871       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0821 10:34:17.756889       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0821 10:34:17.785527       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0821 10:34:17.785576       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0821 10:34:17.785527       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0821 10:34:17.785647       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0821 10:34:20.949364       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-08-21 10:34:00 UTC, ends at Mon 2023-08-21 11:04:34 UTC. --
	Aug 21 11:03:19 addons-500000 kubelet[2369]: E0821 11:03:19.565677    2369 iptables.go:575] "Could not set up iptables canary" err=<
	Aug 21 11:03:19 addons-500000 kubelet[2369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 21 11:03:19 addons-500000 kubelet[2369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 21 11:03:19 addons-500000 kubelet[2369]:  > table=nat chain=KUBE-KUBELET-CANARY
	Aug 21 11:03:23 addons-500000 kubelet[2369]: I0821 11:03:23.452056    2369 scope.go:115] "RemoveContainer" containerID="c0a0c21e7fc373fff20c0a42b48ce36406dced0b381d15ac7b0f6ca174b5c710"
	Aug 21 11:03:23 addons-500000 kubelet[2369]: I0821 11:03:23.941590    2369 scope.go:115] "RemoveContainer" containerID="c0a0c21e7fc373fff20c0a42b48ce36406dced0b381d15ac7b0f6ca174b5c710"
	Aug 21 11:03:23 addons-500000 kubelet[2369]: I0821 11:03:23.942308    2369 scope.go:115] "RemoveContainer" containerID="3be05fbf0cc31cea31bd6608e73f739322366f47f5140bf40cb7b7b636df753b"
	Aug 21 11:03:23 addons-500000 kubelet[2369]: E0821 11:03:23.942638    2369 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 40s restarting failed container=hello-world-app pod=hello-world-app-65bdb79f98-l7sq4_default(03900f9a-54f5-4d53-8e78-2fb31aa983b5)\"" pod="default/hello-world-app-65bdb79f98-l7sq4" podUID=03900f9a-54f5-4d53-8e78-2fb31aa983b5
	Aug 21 11:03:36 addons-500000 kubelet[2369]: I0821 11:03:36.459541    2369 scope.go:115] "RemoveContainer" containerID="3be05fbf0cc31cea31bd6608e73f739322366f47f5140bf40cb7b7b636df753b"
	Aug 21 11:03:36 addons-500000 kubelet[2369]: E0821 11:03:36.460237    2369 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 40s restarting failed container=hello-world-app pod=hello-world-app-65bdb79f98-l7sq4_default(03900f9a-54f5-4d53-8e78-2fb31aa983b5)\"" pod="default/hello-world-app-65bdb79f98-l7sq4" podUID=03900f9a-54f5-4d53-8e78-2fb31aa983b5
	Aug 21 11:03:48 addons-500000 kubelet[2369]: I0821 11:03:48.454293    2369 scope.go:115] "RemoveContainer" containerID="3be05fbf0cc31cea31bd6608e73f739322366f47f5140bf40cb7b7b636df753b"
	Aug 21 11:03:48 addons-500000 kubelet[2369]: E0821 11:03:48.454999    2369 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 40s restarting failed container=hello-world-app pod=hello-world-app-65bdb79f98-l7sq4_default(03900f9a-54f5-4d53-8e78-2fb31aa983b5)\"" pod="default/hello-world-app-65bdb79f98-l7sq4" podUID=03900f9a-54f5-4d53-8e78-2fb31aa983b5
	Aug 21 11:04:01 addons-500000 kubelet[2369]: I0821 11:04:01.453746    2369 scope.go:115] "RemoveContainer" containerID="3be05fbf0cc31cea31bd6608e73f739322366f47f5140bf40cb7b7b636df753b"
	Aug 21 11:04:01 addons-500000 kubelet[2369]: E0821 11:04:01.454850    2369 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 40s restarting failed container=hello-world-app pod=hello-world-app-65bdb79f98-l7sq4_default(03900f9a-54f5-4d53-8e78-2fb31aa983b5)\"" pod="default/hello-world-app-65bdb79f98-l7sq4" podUID=03900f9a-54f5-4d53-8e78-2fb31aa983b5
	Aug 21 11:04:15 addons-500000 kubelet[2369]: I0821 11:04:15.452827    2369 scope.go:115] "RemoveContainer" containerID="3be05fbf0cc31cea31bd6608e73f739322366f47f5140bf40cb7b7b636df753b"
	Aug 21 11:04:15 addons-500000 kubelet[2369]: I0821 11:04:15.766170    2369 scope.go:115] "RemoveContainer" containerID="3be05fbf0cc31cea31bd6608e73f739322366f47f5140bf40cb7b7b636df753b"
	Aug 21 11:04:15 addons-500000 kubelet[2369]: I0821 11:04:15.766354    2369 scope.go:115] "RemoveContainer" containerID="61cb73773eecc3faafe56084535ad2d59c6b1097346767deab59c844d247f185"
	Aug 21 11:04:15 addons-500000 kubelet[2369]: E0821 11:04:15.766482    2369 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=hello-world-app pod=hello-world-app-65bdb79f98-l7sq4_default(03900f9a-54f5-4d53-8e78-2fb31aa983b5)\"" pod="default/hello-world-app-65bdb79f98-l7sq4" podUID=03900f9a-54f5-4d53-8e78-2fb31aa983b5
	Aug 21 11:04:19 addons-500000 kubelet[2369]: W0821 11:04:19.453336    2369 machine.go:65] Cannot read vendor id correctly, set empty.
	Aug 21 11:04:19 addons-500000 kubelet[2369]: E0821 11:04:19.565895    2369 iptables.go:575] "Could not set up iptables canary" err=<
	Aug 21 11:04:19 addons-500000 kubelet[2369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 21 11:04:19 addons-500000 kubelet[2369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 21 11:04:19 addons-500000 kubelet[2369]:  > table=nat chain=KUBE-KUBELET-CANARY
	Aug 21 11:04:30 addons-500000 kubelet[2369]: I0821 11:04:30.452665    2369 scope.go:115] "RemoveContainer" containerID="61cb73773eecc3faafe56084535ad2d59c6b1097346767deab59c844d247f185"
	Aug 21 11:04:30 addons-500000 kubelet[2369]: E0821 11:04:30.456370    2369 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=hello-world-app pod=hello-world-app-65bdb79f98-l7sq4_default(03900f9a-54f5-4d53-8e78-2fb31aa983b5)\"" pod="default/hello-world-app-65bdb79f98-l7sq4" podUID=03900f9a-54f5-4d53-8e78-2fb31aa983b5
	

-- /stdout --
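The kubelet entries at the end of this capture show the hello-world-app container stuck in CrashLoopBackOff, with the restart delay doubling from 40s to 1m20s between log lines. A minimal sketch of that back-off schedule, assuming kubelet's default 10s initial delay and 5m cap (illustrative values, not read from this log):

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// The delay doubles after each failed restart, up to a cap;
		// 10s and 5m are assumed kubelet defaults, for illustration.
		delay := 10 * time.Second
		const maxDelay = 5 * time.Minute
		for attempt := 1; attempt <= 8; attempt++ {
			fmt.Printf("restart %d: back-off %v\n", attempt, delay)
			delay *= 2
			if delay > maxDelay {
				delay = maxDelay
			}
		}
	}

This schedule reproduces the 40s and 1m20s delays visible in the kubelet log as attempts 3 and 4.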
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-500000 -n addons-500000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-500000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-cxgb2 ingress-nginx-admission-patch-fkwhp
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/MetricsServer]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-500000 describe pod ingress-nginx-admission-create-cxgb2 ingress-nginx-admission-patch-fkwhp
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-500000 describe pod ingress-nginx-admission-create-cxgb2 ingress-nginx-admission-patch-fkwhp: exit status 1 (35.478334ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-cxgb2" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-fkwhp" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-500000 describe pod ingress-nginx-admission-create-cxgb2 ingress-nginx-admission-patch-fkwhp: exit status 1
--- FAIL: TestAddons/parallel/MetricsServer (720.90s)

TestAddons/parallel/CSI (545.96s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:535: failed waiting for csi-hostpath-driver pods to stabilize: context deadline exceeded
addons_test.go:537: csi-hostpath-driver pods stabilized in 6m0.000559541s
addons_test.go:540: (dbg) Run:  kubectl --context addons-500000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:545: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-500000 get pvc hpvc -o jsonpath={.status.phase} -n default
... (the line above repeats 185 more times as the test polls the PVC phase over the 6m0s wait) ...
addons_test.go:546: failed waiting for PVC hpvc: context deadline exceeded
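For reference, kubectl can block on this condition directly instead of re-running the jsonpath poll in a loop; a minimal manual sketch, assuming the addons-500000 context is still reachable (the 2m timeout here is illustrative, not the harness's value):

    # Block until the hpvc claim reports phase Bound, or give up after 2 minutes.
    kubectl --context addons-500000 -n default wait pvc/hpvc \
      --for=jsonpath='{.status.phase}'=Bound --timeout=2m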
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-500000 -n addons-500000
helpers_test.go:244: <<< TestAddons/parallel/CSI FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/CSI]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-500000 logs -n 25
helpers_test.go:252: TestAddons/parallel/CSI logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-670000 | jenkins | v1.31.2 | 21 Aug 23 03:33 PDT |                     |
	|         | -p download-only-670000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	| start   | -o=json --download-only           | download-only-670000 | jenkins | v1.31.2 | 21 Aug 23 03:33 PDT |                     |
	|         | -p download-only-670000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.4      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	| start   | -o=json --download-only           | download-only-670000 | jenkins | v1.31.2 | 21 Aug 23 03:33 PDT |                     |
	|         | -p download-only-670000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.0-rc.1 |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.31.2 | 21 Aug 23 03:33 PDT | 21 Aug 23 03:33 PDT |
	| delete  | -p download-only-670000           | download-only-670000 | jenkins | v1.31.2 | 21 Aug 23 03:33 PDT | 21 Aug 23 03:33 PDT |
	| delete  | -p download-only-670000           | download-only-670000 | jenkins | v1.31.2 | 21 Aug 23 03:33 PDT | 21 Aug 23 03:33 PDT |
	| start   | --download-only -p                | binary-mirror-462000 | jenkins | v1.31.2 | 21 Aug 23 03:33 PDT |                     |
	|         | binary-mirror-462000              |                      |         |         |                     |                     |
	|         | --alsologtostderr                 |                      |         |         |                     |                     |
	|         | --binary-mirror                   |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49329            |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-462000           | binary-mirror-462000 | jenkins | v1.31.2 | 21 Aug 23 03:33 PDT | 21 Aug 23 03:33 PDT |
	| start   | -p addons-500000                  | addons-500000        | jenkins | v1.31.2 | 21 Aug 23 03:33 PDT | 21 Aug 23 03:40 PDT |
	|         | --wait=true --memory=4000         |                      |         |         |                     |                     |
	|         | --alsologtostderr                 |                      |         |         |                     |                     |
	|         | --addons=registry                 |                      |         |         |                     |                     |
	|         | --addons=metrics-server           |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots          |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver      |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                 |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner            |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget         |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	|         | --addons=ingress                  |                      |         |         |                     |                     |
	|         | --addons=ingress-dns              |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p          | addons-500000        | jenkins | v1.31.2 | 21 Aug 23 03:52 PDT |                     |
	|         | addons-500000                     |                      |         |         |                     |                     |
	| ssh     | addons-500000 ssh curl -s         | addons-500000        | jenkins | v1.31.2 | 21 Aug 23 04:02 PDT | 21 Aug 23 04:02 PDT |
	|         | http://127.0.0.1/ -H 'Host:       |                      |         |         |                     |                     |
	|         | nginx.example.com'                |                      |         |         |                     |                     |
	| ip      | addons-500000 ip                  | addons-500000        | jenkins | v1.31.2 | 21 Aug 23 04:02 PDT | 21 Aug 23 04:02 PDT |
	| addons  | addons-500000 addons disable      | addons-500000        | jenkins | v1.31.2 | 21 Aug 23 04:02 PDT |                     |
	|         | ingress-dns --alsologtostderr     |                      |         |         |                     |                     |
	|         | -v=1                              |                      |         |         |                     |                     |
	| addons  | enable headlamp                   | addons-500000        | jenkins | v1.31.2 | 21 Aug 23 04:04 PDT | 21 Aug 23 04:04 PDT |
	|         | -p addons-500000                  |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1            |                      |         |         |                     |                     |
	| addons  | addons-500000 addons disable      | addons-500000        | jenkins | v1.31.2 | 21 Aug 23 04:04 PDT | 21 Aug 23 04:04 PDT |
	|         | ingress --alsologtostderr -v=1    |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
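	(The audit table above can also be pulled on its own; recent minikube releases expose it through the logs command. A sketch, assuming this build carries the flag:)
	# Print only the command-audit table, without the rest of the post-mortem.
	out/minikube-darwin-arm64 logs --audit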
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/21 03:33:48
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.7 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0821 03:33:48.415064    1442 out.go:296] Setting OutFile to fd 1 ...
	I0821 03:33:48.415176    1442 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 03:33:48.415179    1442 out.go:309] Setting ErrFile to fd 2...
	I0821 03:33:48.415182    1442 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 03:33:48.415284    1442 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17102-920/.minikube/bin
	I0821 03:33:48.416485    1442 out.go:303] Setting JSON to false
	I0821 03:33:48.431675    1442 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":202,"bootTime":1692613826,"procs":392,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0821 03:33:48.431757    1442 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0821 03:33:48.436776    1442 out.go:177] * [addons-500000] minikube v1.31.2 on Darwin 13.5 (arm64)
	I0821 03:33:48.443786    1442 out.go:177]   - MINIKUBE_LOCATION=17102
	I0821 03:33:48.443817    1442 notify.go:220] Checking for updates...
	I0821 03:33:48.452754    1442 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	I0821 03:33:48.459793    1442 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0821 03:33:48.466761    1442 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0821 03:33:48.469754    1442 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	I0821 03:33:48.472801    1442 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0821 03:33:48.476845    1442 driver.go:373] Setting default libvirt URI to qemu:///system
	I0821 03:33:48.479685    1442 out.go:177] * Using the qemu2 driver based on user configuration
	I0821 03:33:48.486794    1442 start.go:298] selected driver: qemu2
	I0821 03:33:48.486801    1442 start.go:902] validating driver "qemu2" against <nil>
	I0821 03:33:48.486809    1442 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0821 03:33:48.488928    1442 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0821 03:33:48.491687    1442 out.go:177] * Automatically selected the socket_vmnet network
	I0821 03:33:48.495787    1442 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0821 03:33:48.495806    1442 cni.go:84] Creating CNI manager for ""
	I0821 03:33:48.495814    1442 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0821 03:33:48.495818    1442 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0821 03:33:48.495823    1442 start_flags.go:319] config:
	{Name:addons-500000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:addons-500000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 03:33:48.500226    1442 iso.go:125] acquiring lock: {Name:mk813ea611542195bb0511881888be3fabc72ff6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 03:33:48.506762    1442 out.go:177] * Starting control plane node addons-500000 in cluster addons-500000
	I0821 03:33:48.510761    1442 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime docker
	I0821 03:33:48.510781    1442 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4
	I0821 03:33:48.510799    1442 cache.go:57] Caching tarball of preloaded images
	I0821 03:33:48.510861    1442 preload.go:174] Found /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0821 03:33:48.510867    1442 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on docker
	I0821 03:33:48.511057    1442 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/config.json ...
	I0821 03:33:48.511069    1442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/config.json: {Name:mke6ea6a330608889e821054234e4dab41e05376 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:33:48.511283    1442 start.go:365] acquiring machines lock for addons-500000: {Name:mk9b32d9fe994be32d77812db464b2cfa7bfb400 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0821 03:33:48.511397    1442 start.go:369] acquired machines lock for "addons-500000" in 109.25µs
	I0821 03:33:48.511409    1442 start.go:93] Provisioning new machine with config: &{Name:addons-500000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:addons-500000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0821 03:33:48.511444    1442 start.go:125] createHost starting for "" (driver="qemu2")
	I0821 03:33:48.515777    1442 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0821 03:33:48.825711    1442 start.go:159] libmachine.API.Create for "addons-500000" (driver="qemu2")
	I0821 03:33:48.825759    1442 client.go:168] LocalClient.Create starting
	I0821 03:33:48.825907    1442 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem
	I0821 03:33:48.926786    1442 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/cert.pem
	I0821 03:33:49.005435    1442 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17102-920/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17102-920/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0821 03:33:49.429478    1442 main.go:141] libmachine: Creating SSH key...
	I0821 03:33:49.603069    1442 main.go:141] libmachine: Creating Disk image...
	I0821 03:33:49.603078    1442 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0821 03:33:49.603290    1442 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/disk.qcow2.raw /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/disk.qcow2
	I0821 03:33:49.637224    1442 main.go:141] libmachine: STDOUT: 
	I0821 03:33:49.637249    1442 main.go:141] libmachine: STDERR: 
	I0821 03:33:49.637377    1442 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/disk.qcow2 +20000M
	I0821 03:33:49.644766    1442 main.go:141] libmachine: STDOUT: Image resized.
	
	I0821 03:33:49.644778    1442 main.go:141] libmachine: STDERR: 
	I0821 03:33:49.644801    1442 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/disk.qcow2
	I0821 03:33:49.644808    1442 main.go:141] libmachine: Starting QEMU VM...
	I0821 03:33:49.644850    1442 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:15:38:20:81:6d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/disk.qcow2
	I0821 03:33:49.712858    1442 main.go:141] libmachine: STDOUT: 
	I0821 03:33:49.712896    1442 main.go:141] libmachine: STDERR: 
	I0821 03:33:49.712900    1442 main.go:141] libmachine: Attempt 0
	I0821 03:33:49.712923    1442 main.go:141] libmachine: Searching for 5e:15:38:20:81:6d in /var/db/dhcpd_leases ...
	I0821 03:33:51.714037    1442 main.go:141] libmachine: Attempt 1
	I0821 03:33:51.714122    1442 main.go:141] libmachine: Searching for 5e:15:38:20:81:6d in /var/db/dhcpd_leases ...
	I0821 03:33:53.715339    1442 main.go:141] libmachine: Attempt 2
	I0821 03:33:53.715370    1442 main.go:141] libmachine: Searching for 5e:15:38:20:81:6d in /var/db/dhcpd_leases ...
	I0821 03:33:55.716394    1442 main.go:141] libmachine: Attempt 3
	I0821 03:33:55.716406    1442 main.go:141] libmachine: Searching for 5e:15:38:20:81:6d in /var/db/dhcpd_leases ...
	I0821 03:33:57.717443    1442 main.go:141] libmachine: Attempt 4
	I0821 03:33:57.717472    1442 main.go:141] libmachine: Searching for 5e:15:38:20:81:6d in /var/db/dhcpd_leases ...
	I0821 03:33:59.718558    1442 main.go:141] libmachine: Attempt 5
	I0821 03:33:59.718579    1442 main.go:141] libmachine: Searching for 5e:15:38:20:81:6d in /var/db/dhcpd_leases ...
	I0821 03:34:01.719634    1442 main.go:141] libmachine: Attempt 6
	I0821 03:34:01.719657    1442 main.go:141] libmachine: Searching for 5e:15:38:20:81:6d in /var/db/dhcpd_leases ...
	I0821 03:34:01.719810    1442 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0821 03:34:01.719849    1442 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:5e:15:38:20:81:6d ID:1,5e:15:38:20:81:6d Lease:0x64e48f18}
	I0821 03:34:01.719855    1442 main.go:141] libmachine: Found match: 5e:15:38:20:81:6d
	I0821 03:34:01.719867    1442 main.go:141] libmachine: IP: 192.168.105.2
	I0821 03:34:01.719873    1442 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
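	(The MAC search above scans the host-side vmnet DHCP lease file; the lookup can be reproduced by hand. A sketch, assuming the default macOS lease layout in which the ip_address line precedes hw_address:)
	# Show the lease entry for the VM's NIC; -B2 pulls in the recorded IP above it.
	grep -B2 '5e:15:38:20:81:6d' /var/db/dhcpd_leases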
	I0821 03:34:03.738025    1442 machine.go:88] provisioning docker machine ...
	I0821 03:34:03.738086    1442 buildroot.go:166] provisioning hostname "addons-500000"
	I0821 03:34:03.739549    1442 main.go:141] libmachine: Using SSH client type: native
	I0821 03:34:03.740347    1442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102aae1e0] 0x102ab0c40 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0821 03:34:03.740367    1442 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-500000 && echo "addons-500000" | sudo tee /etc/hostname
	I0821 03:34:03.826570    1442 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-500000
	
	I0821 03:34:03.826696    1442 main.go:141] libmachine: Using SSH client type: native
	I0821 03:34:03.827174    1442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102aae1e0] 0x102ab0c40 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0821 03:34:03.827189    1442 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-500000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-500000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-500000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0821 03:34:03.891757    1442 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0821 03:34:03.891772    1442 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17102-920/.minikube CaCertPath:/Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17102-920/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17102-920/.minikube}
	I0821 03:34:03.891782    1442 buildroot.go:174] setting up certificates
	I0821 03:34:03.891796    1442 provision.go:83] configureAuth start
	I0821 03:34:03.891801    1442 provision.go:138] copyHostCerts
	I0821 03:34:03.891982    1442 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17102-920/.minikube/ca.pem (1078 bytes)
	I0821 03:34:03.892356    1442 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17102-920/.minikube/cert.pem (1123 bytes)
	I0821 03:34:03.892494    1442 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17102-920/.minikube/key.pem (1679 bytes)
	I0821 03:34:03.892606    1442 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17102-920/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca-key.pem org=jenkins.addons-500000 san=[192.168.105.2 192.168.105.2 localhost 127.0.0.1 minikube addons-500000]
	I0821 03:34:04.055231    1442 provision.go:172] copyRemoteCerts
	I0821 03:34:04.055290    1442 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0821 03:34:04.055299    1442 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/id_rsa Username:docker}
	I0821 03:34:04.085022    1442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0821 03:34:04.091757    1442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0821 03:34:04.098302    1442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0821 03:34:04.105297    1442 provision.go:86] duration metric: configureAuth took 213.489792ms
	I0821 03:34:04.105304    1442 buildroot.go:189] setting minikube options for container-runtime
	I0821 03:34:04.105410    1442 config.go:182] Loaded profile config "addons-500000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0821 03:34:04.105443    1442 main.go:141] libmachine: Using SSH client type: native
	I0821 03:34:04.105658    1442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102aae1e0] 0x102ab0c40 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0821 03:34:04.105665    1442 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0821 03:34:04.160033    1442 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0821 03:34:04.160039    1442 buildroot.go:70] root file system type: tmpfs
	I0821 03:34:04.160095    1442 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0821 03:34:04.160145    1442 main.go:141] libmachine: Using SSH client type: native
	I0821 03:34:04.160376    1442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102aae1e0] 0x102ab0c40 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0821 03:34:04.160410    1442 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0821 03:34:04.217511    1442 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0821 03:34:04.217555    1442 main.go:141] libmachine: Using SSH client type: native
	I0821 03:34:04.217777    1442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102aae1e0] 0x102ab0c40 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0821 03:34:04.217788    1442 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0821 03:34:04.516566    1442 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
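	(Because no prior unit file existed, the diff fails and the freshly written unit is moved into place and enabled. Two standard systemctl queries confirm the result from inside the guest:)
	# Verify systemd loaded the rendered unit and the daemon came up after restart.
	sudo systemctl cat docker.service
	sudo systemctl is-active docker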
	
	I0821 03:34:04.516576    1442 machine.go:91] provisioned docker machine in 778.543875ms
	I0821 03:34:04.516581    1442 client.go:171] LocalClient.Create took 15.691254833s
	I0821 03:34:04.516600    1442 start.go:167] duration metric: libmachine.API.Create for "addons-500000" took 15.691329875s
	I0821 03:34:04.516605    1442 start.go:300] post-start starting for "addons-500000" (driver="qemu2")
	I0821 03:34:04.516610    1442 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0821 03:34:04.516676    1442 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0821 03:34:04.516684    1442 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/id_rsa Username:docker}
	I0821 03:34:04.547645    1442 ssh_runner.go:195] Run: cat /etc/os-release
	I0821 03:34:04.548977    1442 info.go:137] Remote host: Buildroot 2021.02.12
	I0821 03:34:04.548988    1442 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17102-920/.minikube/addons for local assets ...
	I0821 03:34:04.549067    1442 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17102-920/.minikube/files for local assets ...
	I0821 03:34:04.549094    1442 start.go:303] post-start completed in 32.487208ms
	I0821 03:34:04.549503    1442 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/config.json ...
	I0821 03:34:04.549671    1442 start.go:128] duration metric: createHost completed in 16.038665083s
	I0821 03:34:04.549713    1442 main.go:141] libmachine: Using SSH client type: native
	I0821 03:34:04.549937    1442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102aae1e0] 0x102ab0c40 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0821 03:34:04.549942    1442 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0821 03:34:04.603319    1442 main.go:141] libmachine: SSH cmd err, output: <nil>: 1692614044.503149419
	
	I0821 03:34:04.603325    1442 fix.go:206] guest clock: 1692614044.503149419
	I0821 03:34:04.603329    1442 fix.go:219] Guest: 2023-08-21 03:34:04.503149419 -0700 PDT Remote: 2023-08-21 03:34:04.549674 -0700 PDT m=+16.153755168 (delta=-46.524581ms)
	I0821 03:34:04.603340    1442 fix.go:190] guest clock delta is within tolerance: -46.524581ms
	I0821 03:34:04.603349    1442 start.go:83] releasing machines lock for "addons-500000", held for 16.092394834s
	I0821 03:34:04.603625    1442 ssh_runner.go:195] Run: cat /version.json
	I0821 03:34:04.603635    1442 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/id_rsa Username:docker}
	I0821 03:34:04.603639    1442 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0821 03:34:04.603685    1442 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/id_rsa Username:docker}
	I0821 03:34:04.631400    1442 ssh_runner.go:195] Run: systemctl --version
	I0821 03:34:04.633303    1442 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0821 03:34:04.675003    1442 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0821 03:34:04.675044    1442 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0821 03:34:04.680093    1442 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0821 03:34:04.680102    1442 start.go:466] detecting cgroup driver to use...
	I0821 03:34:04.680217    1442 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0821 03:34:04.685575    1442 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0821 03:34:04.689003    1442 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0821 03:34:04.692463    1442 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0821 03:34:04.692496    1442 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0821 03:34:04.695492    1442 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0821 03:34:04.698438    1442 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0821 03:34:04.701779    1442 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0821 03:34:04.705308    1442 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0821 03:34:04.708997    1442 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0821 03:34:04.712485    1442 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0821 03:34:04.715157    1442 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0821 03:34:04.718062    1442 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0821 03:34:04.801182    1442 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0821 03:34:04.809752    1442 start.go:466] detecting cgroup driver to use...
	I0821 03:34:04.809829    1442 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0821 03:34:04.815491    1442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0821 03:34:04.820439    1442 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0821 03:34:04.826330    1442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0821 03:34:04.831197    1442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0821 03:34:04.835955    1442 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0821 03:34:04.893707    1442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0821 03:34:04.899704    1442 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0821 03:34:04.905738    1442 ssh_runner.go:195] Run: which cri-dockerd
	I0821 03:34:04.907314    1442 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0821 03:34:04.910018    1442 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0821 03:34:04.915159    1442 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0821 03:34:04.993497    1442 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0821 03:34:05.073322    1442 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0821 03:34:05.073337    1442 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0821 03:34:05.078736    1442 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0821 03:34:05.148942    1442 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0821 03:34:06.310888    1442 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.161962625s)
	I0821 03:34:06.310946    1442 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0821 03:34:06.389910    1442 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0821 03:34:06.470512    1442 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0821 03:34:06.540771    1442 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0821 03:34:06.608028    1442 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0821 03:34:06.614951    1442 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0821 03:34:06.680856    1442 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0821 03:34:06.705016    1442 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0821 03:34:06.705100    1442 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0821 03:34:06.707492    1442 start.go:534] Will wait 60s for crictl version
	I0821 03:34:06.707526    1442 ssh_runner.go:195] Run: which crictl
	I0821 03:34:06.708906    1442 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0821 03:34:06.723485    1442 start.go:550] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.4
	RuntimeApiVersion:  v1alpha2
	I0821 03:34:06.723553    1442 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0821 03:34:06.733136    1442 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0821 03:34:06.752243    1442 out.go:204] * Preparing Kubernetes v1.27.4 on Docker 24.0.4 ...
	I0821 03:34:06.752395    1442 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0821 03:34:06.753728    1442 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0821 03:34:06.757671    1442 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime docker
	I0821 03:34:06.757717    1442 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0821 03:34:06.767699    1442 docker.go:636] Got preloaded images: 
	I0821 03:34:06.767706    1442 docker.go:642] registry.k8s.io/kube-apiserver:v1.27.4 wasn't preloaded
	I0821 03:34:06.767758    1442 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0821 03:34:06.770623    1442 ssh_runner.go:195] Run: which lz4
	I0821 03:34:06.772016    1442 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0821 03:34:06.773407    1442 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0821 03:34:06.773426    1442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (343658271 bytes)
	I0821 03:34:08.065715    1442 docker.go:600] Took 1.293779 seconds to copy over tarball
	I0821 03:34:08.065776    1442 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0821 03:34:09.083194    1442 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.017432542s)
	I0821 03:34:09.083208    1442 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0821 03:34:09.098174    1442 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0821 03:34:09.101758    1442 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0821 03:34:09.107271    1442 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0821 03:34:09.185186    1442 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0821 03:34:11.583398    1442 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.398262792s)
	I0821 03:34:11.583497    1442 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0821 03:34:11.599112    1442 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.4
	registry.k8s.io/kube-controller-manager:v1.27.4
	registry.k8s.io/kube-scheduler:v1.27.4
	registry.k8s.io/kube-proxy:v1.27.4
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0821 03:34:11.599121    1442 cache_images.go:84] Images are preloaded, skipping loading
	I0821 03:34:11.599173    1442 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0821 03:34:11.606813    1442 cni.go:84] Creating CNI manager for ""
	I0821 03:34:11.606822    1442 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0821 03:34:11.606852    1442 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0821 03:34:11.606862    1442 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.27.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-500000 NodeName:addons-500000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0821 03:34:11.606930    1442 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-500000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
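	(A config rendered like the one above can be exercised without touching the node: kubeadm's --dry-run prints the actions it would take. A sketch, assuming the kubeadm.yaml.new written below has been moved into its final path:)
	# Validate the rendered config end to end; nothing is applied in dry-run mode.
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run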
	
	I0821 03:34:11.606959    1442 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-500000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.4 ClusterName:addons-500000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0821 03:34:11.607013    1442 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.4
	I0821 03:34:11.609958    1442 binaries.go:44] Found k8s binaries, skipping transfer
	I0821 03:34:11.609992    1442 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0821 03:34:11.613080    1442 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0821 03:34:11.618135    1442 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0821 03:34:11.623217    1442 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I0821 03:34:11.628067    1442 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0821 03:34:11.629338    1442 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0821 03:34:11.633264    1442 certs.go:56] Setting up /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000 for IP: 192.168.105.2
	I0821 03:34:11.633272    1442 certs.go:190] acquiring lock for shared ca certs: {Name:mkaf8bee91c9bef113528e728629bac5c142d5d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:34:11.633419    1442 certs.go:204] generating minikubeCA CA: /Users/jenkins/minikube-integration/17102-920/.minikube/ca.key
	I0821 03:34:11.709497    1442 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17102-920/.minikube/ca.crt ...
	I0821 03:34:11.709504    1442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/ca.crt: {Name:mk11304afc04d282dffa1bbfafecb7763b86f0d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:34:11.709741    1442 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17102-920/.minikube/ca.key ...
	I0821 03:34:11.709747    1442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/ca.key: {Name:mk7632addcfceaabe09bce428c8dd59051132a6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:34:11.709856    1442 certs.go:204] generating proxyClientCA CA: /Users/jenkins/minikube-integration/17102-920/.minikube/proxy-client-ca.key
	I0821 03:34:11.928292    1442 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17102-920/.minikube/proxy-client-ca.crt ...
	I0821 03:34:11.928298    1442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/proxy-client-ca.crt: {Name:mk59ba2d6f1e462ee2e456d21a76e6acaba82b70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:34:11.928531    1442 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17102-920/.minikube/proxy-client-ca.key ...
	I0821 03:34:11.928534    1442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/proxy-client-ca.key: {Name:mk02c96134c44ce7714696be07e0b5c22f58dc64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:34:11.928684    1442 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/client.key
	I0821 03:34:11.928691    1442 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/client.crt with IP's: []
	I0821 03:34:12.116170    1442 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/client.crt ...
	I0821 03:34:12.116177    1442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/client.crt: {Name:mk3182b685506ec2dbfcad41054e3ffc2bf0f3b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:34:12.116379    1442 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/client.key ...
	I0821 03:34:12.116384    1442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/client.key: {Name:mk087ee0a568a92e1e97ae6eb06dd6604454b2e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:34:12.116489    1442 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/apiserver.key.96055969
	I0821 03:34:12.116499    1442 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/apiserver.crt.96055969 with IP's: [192.168.105.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0821 03:34:12.174634    1442 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/apiserver.crt.96055969 ...
	I0821 03:34:12.174637    1442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/apiserver.crt.96055969: {Name:mk02f137a3a75334a28e6811666f6d1dde47709c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:34:12.174771    1442 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/apiserver.key.96055969 ...
	I0821 03:34:12.174774    1442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/apiserver.key.96055969: {Name:mk629f60ce1370d0aadb852a255428713cef631b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:34:12.174873    1442 certs.go:337] copying /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/apiserver.crt.96055969 -> /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/apiserver.crt
	I0821 03:34:12.175028    1442 certs.go:341] copying /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/apiserver.key.96055969 -> /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/apiserver.key
	I0821 03:34:12.175114    1442 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/proxy-client.key
	I0821 03:34:12.175123    1442 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/proxy-client.crt with IP's: []
	I0821 03:34:12.291172    1442 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/proxy-client.crt ...
	I0821 03:34:12.291175    1442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/proxy-client.crt: {Name:mk4861ba5de37ed8d82543663b167ed0e04664dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:34:12.291331    1442 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/proxy-client.key ...
	I0821 03:34:12.291334    1442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/proxy-client.key: {Name:mk5eb1fb206858f7f6262a3b86ec8673fdeb4399 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:34:12.291586    1442 certs.go:437] found cert: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca-key.pem (1679 bytes)
	I0821 03:34:12.291611    1442 certs.go:437] found cert: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem (1078 bytes)
	I0821 03:34:12.291633    1442 certs.go:437] found cert: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/Users/jenkins/minikube-integration/17102-920/.minikube/certs/cert.pem (1123 bytes)
	I0821 03:34:12.291654    1442 certs.go:437] found cert: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/Users/jenkins/minikube-integration/17102-920/.minikube/certs/key.pem (1679 bytes)
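	
	The sequence above builds minikube's PKI from scratch: a self-signed cluster CA, a proxy-client CA, and leaf certificates (client, apiserver with the node/service IPs as SANs, aggregator proxy-client) signed by them. minikube does this in Go (crypto.go); an illustrative openssl equivalent of the same shape follows — file names and lifetimes are placeholders, not minikube's values:
	
	    openssl genrsa -out ca.key 2048
	    openssl req -x509 -new -key ca.key -subj "/CN=minikubeCA" -days 365 -out ca.crt
	    openssl genrsa -out apiserver.key 2048
	    openssl req -new -key apiserver.key -subj "/CN=minikube" -out apiserver.csr
	    openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
	      -extfile <(printf "subjectAltName=IP:192.168.105.2,IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1") \
	      -days 365 -out apiserver.crt
	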
	I0821 03:34:12.292029    1442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0821 03:34:12.300489    1442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0821 03:34:12.307765    1442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0821 03:34:12.314499    1442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0821 03:34:12.321449    1442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0821 03:34:12.328965    1442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0821 03:34:12.336085    1442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0821 03:34:12.342676    1442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0821 03:34:12.349529    1442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0821 03:34:12.356907    1442 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0821 03:34:12.363000    1442 ssh_runner.go:195] Run: openssl version
	I0821 03:34:12.364943    1442 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0821 03:34:12.368659    1442 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0821 03:34:12.370316    1442 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 21 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I0821 03:34:12.370337    1442 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0821 03:34:12.372170    1442 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
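	
	The `b5213941.0` link name follows OpenSSL's hashed-directory convention: a CA is looked up by the hash of its subject name plus a sequence number, which is exactly what the `openssl x509 -hash` call above computes for minikubeCA.pem. To double-check by hand inside the guest:
	
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	    readlink /etc/ssl/certs/b5213941.0   # -> /etc/ssl/certs/minikubeCA.pem
	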
	I0821 03:34:12.375051    1442 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0821 03:34:12.376254    1442 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0821 03:34:12.376292    1442 kubeadm.go:404] StartCluster: {Name:addons-500000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:addons-500000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 03:34:12.376353    1442 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0821 03:34:12.381765    1442 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0821 03:34:12.385127    1442 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0821 03:34:12.388050    1442 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0821 03:34:12.390699    1442 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0821 03:34:12.390714    1442 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0821 03:34:12.412358    1442 kubeadm.go:322] [init] Using Kubernetes version: v1.27.4
	I0821 03:34:12.412390    1442 kubeadm.go:322] [preflight] Running pre-flight checks
	I0821 03:34:12.465080    1442 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0821 03:34:12.465135    1442 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0821 03:34:12.465183    1442 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0821 03:34:12.530098    1442 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0821 03:34:12.539343    1442 out.go:204]   - Generating certificates and keys ...
	I0821 03:34:12.539375    1442 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0821 03:34:12.539413    1442 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0821 03:34:12.639909    1442 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0821 03:34:12.680054    1442 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0821 03:34:12.714095    1442 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0821 03:34:12.849965    1442 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0821 03:34:12.996137    1442 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0821 03:34:12.996199    1442 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-500000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0821 03:34:13.141022    1442 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0821 03:34:13.141102    1442 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-500000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0821 03:34:13.228117    1442 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0821 03:34:13.409230    1442 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0821 03:34:13.774136    1442 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0821 03:34:13.774180    1442 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0821 03:34:13.866700    1442 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0821 03:34:13.977782    1442 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0821 03:34:14.068222    1442 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0821 03:34:14.144551    1442 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0821 03:34:14.151809    1442 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0821 03:34:14.152307    1442 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0821 03:34:14.152438    1442 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0821 03:34:14.228545    1442 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0821 03:34:14.232527    1442 out.go:204]   - Booting up control plane ...
	I0821 03:34:14.232575    1442 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0821 03:34:14.232614    1442 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0821 03:34:14.232645    1442 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0821 03:34:14.236440    1442 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0821 03:34:14.238376    1442 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0821 03:34:18.241227    1442 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.002539 seconds
	I0821 03:34:18.241427    1442 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0821 03:34:18.252886    1442 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0821 03:34:18.774491    1442 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0821 03:34:18.774728    1442 kubeadm.go:322] [mark-control-plane] Marking the node addons-500000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0821 03:34:19.280325    1442 kubeadm.go:322] [bootstrap-token] Using token: jvxtql.8wgzhr7nb5g9o93n
	I0821 03:34:19.286479    1442 out.go:204]   - Configuring RBAC rules ...
	I0821 03:34:19.286537    1442 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0821 03:34:19.290363    1442 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0821 03:34:19.293121    1442 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0821 03:34:19.294256    1442 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0821 03:34:19.295736    1442 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0821 03:34:19.296773    1442 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0821 03:34:19.301173    1442 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0821 03:34:19.474355    1442 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0821 03:34:19.693544    1442 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0821 03:34:19.694011    1442 kubeadm.go:322] 
	I0821 03:34:19.694043    1442 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0821 03:34:19.694047    1442 kubeadm.go:322] 
	I0821 03:34:19.694084    1442 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0821 03:34:19.694086    1442 kubeadm.go:322] 
	I0821 03:34:19.694099    1442 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0821 03:34:19.694192    1442 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0821 03:34:19.694216    1442 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0821 03:34:19.694219    1442 kubeadm.go:322] 
	I0821 03:34:19.694251    1442 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0821 03:34:19.694263    1442 kubeadm.go:322] 
	I0821 03:34:19.694293    1442 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0821 03:34:19.694296    1442 kubeadm.go:322] 
	I0821 03:34:19.694320    1442 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0821 03:34:19.694360    1442 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0821 03:34:19.694390    1442 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0821 03:34:19.694394    1442 kubeadm.go:322] 
	I0821 03:34:19.694446    1442 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0821 03:34:19.694488    1442 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0821 03:34:19.694495    1442 kubeadm.go:322] 
	I0821 03:34:19.694535    1442 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token jvxtql.8wgzhr7nb5g9o93n \
	I0821 03:34:19.694617    1442 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:c361d9930575cb4141f86c9c696a425212668e350af0245a5e7de41b1bd48407 \
	I0821 03:34:19.694632    1442 kubeadm.go:322] 	--control-plane 
	I0821 03:34:19.694634    1442 kubeadm.go:322] 
	I0821 03:34:19.694684    1442 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0821 03:34:19.694688    1442 kubeadm.go:322] 
	I0821 03:34:19.694735    1442 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token jvxtql.8wgzhr7nb5g9o93n \
	I0821 03:34:19.694782    1442 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:c361d9930575cb4141f86c9c696a425212668e350af0245a5e7de41b1bd48407 
	I0821 03:34:19.694835    1442 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
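	
	The `--discovery-token-ca-cert-hash` in the join commands above can be recomputed out of band from the cluster CA, which is the standard verification shown in the Kubernetes docs (using minikube's certificate directory from the [certs] phase above):
	
	    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'
	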
	I0821 03:34:19.694840    1442 cni.go:84] Creating CNI manager for ""
	I0821 03:34:19.694847    1442 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0821 03:34:19.703814    1442 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0821 03:34:19.707890    1442 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0821 03:34:19.711023    1442 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
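	
	The 457-byte `/etc/cni/net.d/1-k8s.conflist` written above is the bridge CNI configuration referred to on the surrounding lines. The log does not include the file itself; a minimal bridge-plus-portmap conflist of the same general shape looks like this (illustrative only — the values are not minikube's verbatim template):
	
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        { "type": "bridge", "bridge": "bridge", "isGateway": true, "ipMasq": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	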
	I0821 03:34:19.716873    1442 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0821 03:34:19.716924    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:19.716951    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43 minikube.k8s.io/name=addons-500000 minikube.k8s.io/updated_at=2023_08_21T03_34_19_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:19.723924    1442 ops.go:34] apiserver oom_adj: -16
	I0821 03:34:19.767999    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:19.814902    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:20.352169    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:20.852188    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:21.352164    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:21.852123    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:22.352346    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:22.852184    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:23.352159    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:23.852279    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:24.352116    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:24.852182    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:25.352203    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:25.852083    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:26.352293    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:26.852062    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:27.352046    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:27.851991    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:28.352173    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:28.851976    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:29.352173    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:29.851943    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:30.352016    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:30.851904    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:31.351923    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:31.851905    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:32.351835    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:32.388500    1442 kubeadm.go:1081] duration metric: took 12.671972458s to wait for elevateKubeSystemPrivileges.
	I0821 03:34:32.388516    1442 kubeadm.go:406] StartCluster complete in 20.01278175s
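	
	The run of identical `kubectl get sa default` calls above is a poll: after `kubeadm init`, minikube waits (12.67s here) for the controller manager to create the `default` ServiceAccount, since workloads cannot be admitted before it exists. A sketch of the equivalent loop (the sleep interval is an assumption; minikube's actual backoff differs):
	
	    until sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5
	    done
	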
	I0821 03:34:32.388525    1442 settings.go:142] acquiring lock: {Name:mkeb461ec3a6a92ee32ce41e8df63d6759cb2728 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:34:32.388680    1442 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17102-920/kubeconfig
	I0821 03:34:32.388902    1442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/kubeconfig: {Name:mk2bc9c64ad130c36a0253707ac2ba3f8fd22371 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:34:32.389107    1442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0821 03:34:32.389147    1442 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0821 03:34:32.389221    1442 addons.go:69] Setting volumesnapshots=true in profile "addons-500000"
	I0821 03:34:32.389227    1442 addons.go:231] Setting addon volumesnapshots=true in "addons-500000"
	I0821 03:34:32.389225    1442 addons.go:69] Setting cloud-spanner=true in profile "addons-500000"
	I0821 03:34:32.389236    1442 addons.go:231] Setting addon cloud-spanner=true in "addons-500000"
	I0821 03:34:32.389251    1442 config.go:182] Loaded profile config "addons-500000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0821 03:34:32.389271    1442 host.go:66] Checking if "addons-500000" exists ...
	I0821 03:34:32.389279    1442 addons.go:69] Setting storage-provisioner=true in profile "addons-500000"
	I0821 03:34:32.389222    1442 addons.go:69] Setting gcp-auth=true in profile "addons-500000"
	I0821 03:34:32.389282    1442 addons.go:231] Setting addon storage-provisioner=true in "addons-500000"
	I0821 03:34:32.389288    1442 mustload.go:65] Loading cluster: addons-500000
	I0821 03:34:32.389299    1442 host.go:66] Checking if "addons-500000" exists ...
	I0821 03:34:32.389299    1442 addons.go:69] Setting inspektor-gadget=true in profile "addons-500000"
	I0821 03:34:32.389327    1442 addons.go:69] Setting registry=true in profile "addons-500000"
	I0821 03:34:32.389360    1442 config.go:182] Loaded profile config "addons-500000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0821 03:34:32.389358    1442 addons.go:69] Setting ingress-dns=true in profile "addons-500000"
	I0821 03:34:32.389378    1442 addons.go:231] Setting addon ingress-dns=true in "addons-500000"
	I0821 03:34:32.389273    1442 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-500000"
	I0821 03:34:32.389396    1442 addons.go:69] Setting ingress=true in profile "addons-500000"
	I0821 03:34:32.389434    1442 host.go:66] Checking if "addons-500000" exists ...
	I0821 03:34:32.389418    1442 addons.go:69] Setting metrics-server=true in profile "addons-500000"
	I0821 03:34:32.389454    1442 addons.go:231] Setting addon metrics-server=true in "addons-500000"
	I0821 03:34:32.389465    1442 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-500000"
	I0821 03:34:32.389506    1442 host.go:66] Checking if "addons-500000" exists ...
	I0821 03:34:32.389519    1442 host.go:66] Checking if "addons-500000" exists ...
	I0821 03:34:32.389271    1442 host.go:66] Checking if "addons-500000" exists ...
	W0821 03:34:32.389564    1442 host.go:54] host status for "addons-500000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/monitor: connect: connection refused
	W0821 03:34:32.389572    1442 addons.go:277] "addons-500000" is not running, setting storage-provisioner=true and skipping enablement (err=<nil>)
	I0821 03:34:32.389347    1442 addons.go:231] Setting addon inspektor-gadget=true in "addons-500000"
	I0821 03:34:32.389693    1442 host.go:66] Checking if "addons-500000" exists ...
	W0821 03:34:32.389757    1442 host.go:54] host status for "addons-500000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/monitor: connect: connection refused
	W0821 03:34:32.389767    1442 addons.go:277] "addons-500000" is not running, setting cloud-spanner=true and skipping enablement (err=<nil>)
	I0821 03:34:32.389367    1442 addons.go:231] Setting addon registry=true in "addons-500000"
	I0821 03:34:32.389786    1442 host.go:66] Checking if "addons-500000" exists ...
	W0821 03:34:32.389790    1442 host.go:54] host status for "addons-500000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/monitor: connect: connection refused
	W0821 03:34:32.389796    1442 addons.go:277] "addons-500000" is not running, setting metrics-server=true and skipping enablement (err=<nil>)
	I0821 03:34:32.389799    1442 addons.go:467] Verifying addon metrics-server=true in "addons-500000"
	W0821 03:34:32.389788    1442 host.go:54] host status for "addons-500000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/monitor: connect: connection refused
	W0821 03:34:32.389803    1442 addons.go:277] "addons-500000" is not running, setting csi-hostpath-driver=true and skipping enablement (err=<nil>)
	I0821 03:34:32.389805    1442 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-500000"
	I0821 03:34:32.389275    1442 addons.go:69] Setting default-storageclass=true in profile "addons-500000"
	I0821 03:34:32.394058    1442 out.go:177] * Verifying csi-hostpath-driver addon...
	I0821 03:34:32.389436    1442 addons.go:231] Setting addon ingress=true in "addons-500000"
	I0821 03:34:32.389868    1442 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-500000"
	W0821 03:34:32.389953    1442 host.go:54] host status for "addons-500000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/monitor: connect: connection refused
	W0821 03:34:32.390033    1442 host.go:54] host status for "addons-500000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/monitor: connect: connection refused
	W0821 03:34:32.390053    1442 host.go:54] host status for "addons-500000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/monitor: connect: connection refused
	I0821 03:34:32.390510    1442 host.go:66] Checking if "addons-500000" exists ...
	I0821 03:34:32.409190    1442 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	W0821 03:34:32.404296    1442 addons.go:277] "addons-500000" is not running, setting ingress-dns=true and skipping enablement (err=<nil>)
	W0821 03:34:32.404342    1442 addons.go:277] "addons-500000" is not running, setting inspektor-gadget=true and skipping enablement (err=<nil>)
	W0821 03:34:32.404346    1442 addons.go:277] "addons-500000" is not running, setting registry=true and skipping enablement (err=<nil>)
	I0821 03:34:32.404410    1442 host.go:66] Checking if "addons-500000" exists ...
	I0821 03:34:32.404764    1442 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0821 03:34:32.413218    1442 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0821 03:34:32.413224    1442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0821 03:34:32.413232    1442 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/id_rsa Username:docker}
	I0821 03:34:32.413266    1442 addons.go:467] Verifying addon registry=true in "addons-500000"
	I0821 03:34:32.418274    1442 out.go:177] * Verifying registry addon...
	I0821 03:34:32.419795    1442 addons.go:231] Setting addon default-storageclass=true in "addons-500000"
	I0821 03:34:32.419868    1442 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-500000" context rescaled to 1 replicas
	I0821 03:34:32.420817    1442 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0821 03:34:32.421498    1442 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0821 03:34:32.421694    1442 host.go:66] Checking if "addons-500000" exists ...
	I0821 03:34:32.421701    1442 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0821 03:34:32.421849    1442 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
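	
	The kapi.go waits above poll pods by label selector until they report Ready. The same check can be made by hand (registry selector shown; the csi-hostpath-driver wait is analogous):
	
	    kubectl -n kube-system get pods -l kubernetes.io/minikube-addons=registry
	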
	I0821 03:34:32.431173    1442 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0821 03:34:32.440212    1442 out.go:177] * Verifying Kubernetes components...
	I0821 03:34:32.431974    1442 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0821 03:34:32.435186    1442 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=registry
	I0821 03:34:32.444202    1442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0821 03:34:32.444209    1442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0821 03:34:32.447466    1442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
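	
	The sed pipeline above rewrites the CoreDNS ConfigMap in place so that `host.minikube.internal` resolves to the host side of the VM network; the fragment it splices into the Corefile is:
	
	    hosts {
	       192.168.105.1 host.minikube.internal
	       fallthrough
	    }
	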
	I0821 03:34:32.448196    1442 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.8.1
	I0821 03:34:32.448211    1442 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/id_rsa Username:docker}
	I0821 03:34:32.451292    1442 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0821 03:34:32.451299    1442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16083 bytes)
	I0821 03:34:32.451306    1442 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/id_rsa Username:docker}
	I0821 03:34:32.454351    1442 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0821 03:34:32.454358    1442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0821 03:34:32.485876    1442 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0821 03:34:32.485886    1442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0821 03:34:32.513135    1442 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0821 03:34:32.513147    1442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0821 03:34:32.532036    1442 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0821 03:34:32.532052    1442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0821 03:34:32.537566    1442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0821 03:34:32.542495    1442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0821 03:34:32.548533    1442 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0821 03:34:32.548541    1442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0821 03:34:32.568087    1442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0821 03:34:33.517324    1442 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.069159875s)
	I0821 03:34:33.517338    1442 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.069147125s)
	I0821 03:34:33.517342    1442 start.go:901] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I0821 03:34:33.517808    1442 node_ready.go:35] waiting up to 6m0s for node "addons-500000" to be "Ready" ...
	I0821 03:34:33.519592    1442 node_ready.go:49] node "addons-500000" has status "Ready":"True"
	I0821 03:34:33.519599    1442 node_ready.go:38] duration metric: took 1.779708ms waiting for node "addons-500000" to be "Ready" ...
	I0821 03:34:33.519602    1442 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0821 03:34:33.522687    1442 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-97rp7" in "kube-system" namespace to be "Ready" ...
	I0821 03:34:33.964195    1442 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.421717084s)
	I0821 03:34:33.964211    1442 addons.go:467] Verifying addon ingress=true in "addons-500000"
	I0821 03:34:33.968723    1442 out.go:177] * Verifying ingress addon...
	I0821 03:34:33.964338    1442 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.396275834s)
	W0821 03:34:33.968774    1442 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0821 03:34:33.975741    1442 retry.go:31] will retry after 231.591556ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
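	
	This failure is the usual CRD ordering race: a single `kubectl apply` creates both the VolumeSnapshot CRDs and a VolumeSnapshotClass object, and the API server has not yet registered the new kind when the object arrives — hence "ensure CRDs are installed first". minikube's remedy is the retry below (with `--force`); a general-purpose alternative, not minikube's fix, is a two-phase apply:
	
	    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	    kubectl wait --for condition=Established \
	      crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	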
	I0821 03:34:33.976141    1442 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0821 03:34:33.984299    1442 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0821 03:34:33.984307    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:33.987720    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:34.207434    1442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0821 03:34:34.491123    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:34.991180    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:35.490538    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:35.534205    1442 pod_ready.go:102] pod "coredns-5d78c9869d-97rp7" in "kube-system" namespace has status "Ready":"False"
	I0821 03:34:35.990628    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:36.490998    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:36.745839    1442 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.5384555s)
	I0821 03:34:36.990793    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:37.491119    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:37.534210    1442 pod_ready.go:102] pod "coredns-5d78c9869d-97rp7" in "kube-system" namespace has status "Ready":"False"
	I0821 03:34:37.990643    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:38.490772    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:38.997287    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:39.008172    1442 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0821 03:34:39.008186    1442 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/id_rsa Username:docker}
	I0821 03:34:39.055480    1442 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0821 03:34:39.064828    1442 addons.go:231] Setting addon gcp-auth=true in "addons-500000"
	I0821 03:34:39.064858    1442 host.go:66] Checking if "addons-500000" exists ...
	I0821 03:34:39.065649    1442 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0821 03:34:39.065660    1442 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/id_rsa Username:docker}
	I0821 03:34:39.100776    1442 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0821 03:34:39.103705    1442 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0821 03:34:39.107726    1442 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0821 03:34:39.107734    1442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0821 03:34:39.113078    1442 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0821 03:34:39.113087    1442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0821 03:34:39.127541    1442 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0821 03:34:39.127551    1442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0821 03:34:39.133486    1442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0821 03:34:39.491109    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:39.534694    1442 pod_ready.go:102] pod "coredns-5d78c9869d-97rp7" in "kube-system" namespace has status "Ready":"False"
	I0821 03:34:39.629710    1442 addons.go:467] Verifying addon gcp-auth=true in "addons-500000"
	I0821 03:34:39.641410    1442 out.go:177] * Verifying gcp-auth addon...
	I0821 03:34:39.650441    1442 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0821 03:34:39.656554    1442 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0821 03:34:39.656563    1442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 03:34:39.658191    1442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 03:34:39.991177    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:40.161154    1442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 03:34:40.492443    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:40.660810    1442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 03:34:40.990558    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:41.161357    1442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 03:34:41.492269    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:41.534695    1442 pod_ready.go:102] pod "coredns-5d78c9869d-97rp7" in "kube-system" namespace has status "Ready":"False"
	I0821 03:34:41.660947    1442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 03:34:41.990678    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:42.161013    1442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 03:34:42.490658    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:42.660884    1442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 03:34:42.990530    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:43.161042    1442 kapi.go:107] duration metric: took 3.510698166s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0821 03:34:43.165184    1442 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-500000 cluster.
	I0821 03:34:43.169238    1442 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0821 03:34:43.173158    1442 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
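	
	Per the message above, individual pods opt out of the credential mount with the `gcp-auth-skip-secret` label, e.g. (the pod name here is hypothetical):
	
	    kubectl label pod my-pod gcp-auth-skip-secret=true
	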
	I0821 03:34:43.491145    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:43.534713    1442 pod_ready.go:97] pod "coredns-5d78c9869d-97rp7" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-21 03:34:32 -0700 PDT Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-21 03:34:32 -0700 PDT Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-21 03:34:32 -0700 PDT Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-21 03:34:32 -0700 PDT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.105.2 HostIPs:[] PodIP: PodIPs:[] StartTime:2023-08-21 03:34:32 -0700 PDT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-08-21 03:34:33 -0700 PDT,FinishedAt:2023-08-21 03:34:43 -0700 PDT,ContainerID:docker://d9032391cb53f0fa8cfd4e1696eef2d7eb7096ba08423fd5087bb7b4d2fba5ed,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:docker://d9032391cb53f0fa8cfd4e1696eef2d7eb7096ba08423fd5087bb7b4d2fba5ed Started:0x140018d39a0 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0821 03:34:43.534727    1442 pod_ready.go:81] duration metric: took 10.012309458s waiting for pod "coredns-5d78c9869d-97rp7" in "kube-system" namespace to be "Ready" ...
	E0821 03:34:43.534732    1442 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-5d78c9869d-97rp7" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-21 03:34:32 -0700 PDT Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-21 03:34:32 -0700 PDT Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-21 03:34:32 -0700 PDT Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-21 03:34:32 -0700 PDT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.105.2 HostIPs:[] PodIP: PodIPs:[] StartTime:2023-08-21 03:34:32 -0700 PDT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-08-21 03:34:33 -0700 PDT,FinishedAt:2023-08-21 03:34:43 -0700 PDT,ContainerID:docker://d9032391cb53f0fa8cfd4e1696eef2d7eb7096ba08423fd5087bb7b4d2fba5ed,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:docker://d9032391cb53f0fa8cfd4e1696eef2d7eb7096ba08423fd5087bb7b4d2fba5ed Started:0x140018d39a0 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0821 03:34:43.534736    1442 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-hbg44" in "kube-system" namespace to be "Ready" ...
	I0821 03:34:43.537136    1442 pod_ready.go:92] pod "coredns-5d78c9869d-hbg44" in "kube-system" namespace has status "Ready":"True"
	I0821 03:34:43.537140    1442 pod_ready.go:81] duration metric: took 2.400375ms waiting for pod "coredns-5d78c9869d-hbg44" in "kube-system" namespace to be "Ready" ...
	I0821 03:34:43.537145    1442 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-500000" in "kube-system" namespace to be "Ready" ...
	I0821 03:34:43.539758    1442 pod_ready.go:92] pod "etcd-addons-500000" in "kube-system" namespace has status "Ready":"True"
	I0821 03:34:43.539762    1442 pod_ready.go:81] duration metric: took 2.614916ms waiting for pod "etcd-addons-500000" in "kube-system" namespace to be "Ready" ...
	I0821 03:34:43.539766    1442 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-500000" in "kube-system" namespace to be "Ready" ...
	I0821 03:34:43.542039    1442 pod_ready.go:92] pod "kube-apiserver-addons-500000" in "kube-system" namespace has status "Ready":"True"
	I0821 03:34:43.542045    1442 pod_ready.go:81] duration metric: took 2.276584ms waiting for pod "kube-apiserver-addons-500000" in "kube-system" namespace to be "Ready" ...
	I0821 03:34:43.542049    1442 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-500000" in "kube-system" namespace to be "Ready" ...
	I0821 03:34:43.544341    1442 pod_ready.go:92] pod "kube-controller-manager-addons-500000" in "kube-system" namespace has status "Ready":"True"
	I0821 03:34:43.544345    1442 pod_ready.go:81] duration metric: took 2.2935ms waiting for pod "kube-controller-manager-addons-500000" in "kube-system" namespace to be "Ready" ...
	I0821 03:34:43.544348    1442 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-z2wj9" in "kube-system" namespace to be "Ready" ...
	I0821 03:34:43.933736    1442 pod_ready.go:92] pod "kube-proxy-z2wj9" in "kube-system" namespace has status "Ready":"True"
	I0821 03:34:43.933748    1442 pod_ready.go:81] duration metric: took 389.407375ms waiting for pod "kube-proxy-z2wj9" in "kube-system" namespace to be "Ready" ...
	I0821 03:34:43.933752    1442 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-500000" in "kube-system" namespace to be "Ready" ...
	I0821 03:34:43.990470    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:44.334535    1442 pod_ready.go:92] pod "kube-scheduler-addons-500000" in "kube-system" namespace has status "Ready":"True"
	I0821 03:34:44.334545    1442 pod_ready.go:81] duration metric: took 400.801125ms waiting for pod "kube-scheduler-addons-500000" in "kube-system" namespace to be "Ready" ...
	I0821 03:34:44.334549    1442 pod_ready.go:38] duration metric: took 10.81524225s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0821 03:34:44.334558    1442 api_server.go:52] waiting for apiserver process to appear ...
	I0821 03:34:44.334639    1442 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0821 03:34:44.339980    1442 api_server.go:72] duration metric: took 11.909098333s to wait for apiserver process to appear ...
	I0821 03:34:44.339987    1442 api_server.go:88] waiting for apiserver healthz status ...
	I0821 03:34:44.339993    1442 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I0821 03:34:44.344178    1442 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
	I0821 03:34:44.344920    1442 api_server.go:141] control plane version: v1.27.4
	I0821 03:34:44.344925    1442 api_server.go:131] duration metric: took 4.936ms to wait for apiserver health ...
	I0821 03:34:44.344929    1442 system_pods.go:43] waiting for kube-system pods to appear ...
	I0821 03:34:44.490452    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:44.535983    1442 system_pods.go:59] 8 kube-system pods found
	I0821 03:34:44.535991    1442 system_pods.go:61] "coredns-5d78c9869d-hbg44" [2212048e-385c-4235-ad14-1b9e4e812106] Running
	I0821 03:34:44.535994    1442 system_pods.go:61] "etcd-addons-500000" [dcde2eed-b2a3-4b2d-af51-14d42189714c] Running
	I0821 03:34:44.536011    1442 system_pods.go:61] "kube-apiserver-addons-500000" [a4c38aeb-a7ef-4239-ac34-2437f9c67d96] Running
	I0821 03:34:44.536015    1442 system_pods.go:61] "kube-controller-manager-addons-500000" [972b1e42-cd56-4f77-ad52-a1df2b79fdae] Running
	I0821 03:34:44.536018    1442 system_pods.go:61] "kube-proxy-z2wj9" [56cdd0e9-2b8f-476e-be08-a52381eecb16] Running
	I0821 03:34:44.536020    1442 system_pods.go:61] "kube-scheduler-addons-500000" [c2d2f1e5-45c6-48a9-990d-7e32d9d75976] Running
	I0821 03:34:44.536022    1442 system_pods.go:61] "snapshot-controller-75bbb956b9-4pgqh" [7452ce04-2fbb-4f7a-9e5f-87b8b577fc94] Running
	I0821 03:34:44.536025    1442 system_pods.go:61] "snapshot-controller-75bbb956b9-j9mkf" [dbd2a297-29a5-4435-8fb1-849d8ae91771] Running
	I0821 03:34:44.536028    1442 system_pods.go:74] duration metric: took 191.1015ms to wait for pod list to return data ...
	I0821 03:34:44.536033    1442 default_sa.go:34] waiting for default service account to be created ...
	I0821 03:34:44.734042    1442 default_sa.go:45] found service account: "default"
	I0821 03:34:44.734051    1442 default_sa.go:55] duration metric: took 198.020583ms for default service account to be created ...
	I0821 03:34:44.734055    1442 system_pods.go:116] waiting for k8s-apps to be running ...
	I0821 03:34:44.935348    1442 system_pods.go:86] 8 kube-system pods found
	I0821 03:34:44.935359    1442 system_pods.go:89] "coredns-5d78c9869d-hbg44" [2212048e-385c-4235-ad14-1b9e4e812106] Running
	I0821 03:34:44.935362    1442 system_pods.go:89] "etcd-addons-500000" [dcde2eed-b2a3-4b2d-af51-14d42189714c] Running
	I0821 03:34:44.935365    1442 system_pods.go:89] "kube-apiserver-addons-500000" [a4c38aeb-a7ef-4239-ac34-2437f9c67d96] Running
	I0821 03:34:44.935367    1442 system_pods.go:89] "kube-controller-manager-addons-500000" [972b1e42-cd56-4f77-ad52-a1df2b79fdae] Running
	I0821 03:34:44.935369    1442 system_pods.go:89] "kube-proxy-z2wj9" [56cdd0e9-2b8f-476e-be08-a52381eecb16] Running
	I0821 03:34:44.935372    1442 system_pods.go:89] "kube-scheduler-addons-500000" [c2d2f1e5-45c6-48a9-990d-7e32d9d75976] Running
	I0821 03:34:44.935374    1442 system_pods.go:89] "snapshot-controller-75bbb956b9-4pgqh" [7452ce04-2fbb-4f7a-9e5f-87b8b577fc94] Running
	I0821 03:34:44.935376    1442 system_pods.go:89] "snapshot-controller-75bbb956b9-j9mkf" [dbd2a297-29a5-4435-8fb1-849d8ae91771] Running
	I0821 03:34:44.935380    1442 system_pods.go:126] duration metric: took 201.327917ms to wait for k8s-apps to be running ...
	I0821 03:34:44.935391    1442 system_svc.go:44] waiting for kubelet service to be running ....
	I0821 03:34:44.935475    1442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0821 03:34:44.941643    1442 system_svc.go:56] duration metric: took 6.252209ms WaitForService to wait for kubelet.
	I0821 03:34:44.941651    1442 kubeadm.go:581] duration metric: took 12.5107865s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0821 03:34:44.941660    1442 node_conditions.go:102] verifying NodePressure condition ...
	I0821 03:34:44.990746    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:45.134674    1442 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0821 03:34:45.134706    1442 node_conditions.go:123] node cpu capacity is 2
	I0821 03:34:45.134712    1442 node_conditions.go:105] duration metric: took 193.055083ms to run NodePressure ...
	I0821 03:34:45.134717    1442 start.go:228] waiting for startup goroutines ...
	I0821 03:34:45.490470    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:45.990643    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:46.490327    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:46.990587    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:47.490536    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:47.990358    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:48.490279    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:48.990490    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:49.490328    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:49.990414    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:50.490337    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:50.990260    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:51.490639    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:51.989843    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:52.490813    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:52.990112    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:53.491005    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:53.992627    1442 kapi.go:107] duration metric: took 20.017033875s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0821 03:40:32.405313    1442 kapi.go:107] duration metric: took 6m0.010490834s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	W0821 03:40:32.405643    1442 out.go:239] ! Enabling 'csi-hostpath-driver' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=csi-hostpath-driver pods: context deadline exceeded]
	I0821 03:40:32.421828    1442 kapi.go:107] duration metric: took 6m0.009978583s to wait for kubernetes.io/minikube-addons=registry ...
	W0821 03:40:32.421921    1442 out.go:239] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
	I0821 03:40:32.430174    1442 out.go:177] * Enabled addons: storage-provisioner, cloud-spanner, metrics-server, ingress-dns, inspektor-gadget, default-storageclass, volumesnapshots, gcp-auth, ingress
	I0821 03:40:32.437176    1442 addons.go:502] enable addons completed in 6m0.058033333s: enabled=[storage-provisioner cloud-spanner metrics-server ingress-dns inspektor-gadget default-storageclass volumesnapshots gcp-auth ingress]
	I0821 03:40:32.437214    1442 start.go:233] waiting for cluster config update ...
	I0821 03:40:32.437252    1442 start.go:242] writing updated cluster config ...
	I0821 03:40:32.438394    1442 ssh_runner.go:195] Run: rm -f paused
	I0821 03:40:32.505190    1442 start.go:600] kubectl: 1.27.2, cluster: 1.27.4 (minor skew: 0)
	I0821 03:40:32.509248    1442 out.go:177] * Done! kubectl is now configured to use "addons-500000" cluster and "default" namespace by default
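	Note: the two addon timeouts above (csi-hostpath-driver and registry) wait on the same label selectors that appear in the kapi.go lines. As a sketch, assuming kubectl is pointed at the "addons-500000" cluster as in the final log line, their pod state can be checked directly:
	
	  # both selectors hit the 6m0s context deadline above
	  kubectl get pods -A -l kubernetes.io/minikube-addons=csi-hostpath-driver
	  kubectl get pods -A -l kubernetes.io/minikube-addons=registry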
	
	* 
	* ==> Docker <==
	* -- Journal begins at Mon 2023-08-21 10:34:00 UTC, ends at Mon 2023-08-21 11:13:48 UTC. --
	Aug 21 11:04:39 addons-500000 dockerd[1153]: time="2023-08-21T11:04:39.297059683Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 21 11:05:46 addons-500000 dockerd[1153]: time="2023-08-21T11:05:46.506202146Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 21 11:05:46 addons-500000 dockerd[1153]: time="2023-08-21T11:05:46.506265480Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 21 11:05:46 addons-500000 dockerd[1153]: time="2023-08-21T11:05:46.506288396Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 21 11:05:46 addons-500000 dockerd[1153]: time="2023-08-21T11:05:46.506297105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 21 11:05:46 addons-500000 dockerd[1153]: time="2023-08-21T11:05:46.564056804Z" level=info msg="shim disconnected" id=73b7baa19915d562d3c78fced74c21ad47385fce38038919d957ea0a4986b5d7 namespace=moby
	Aug 21 11:05:46 addons-500000 dockerd[1148]: time="2023-08-21T11:05:46.564210054Z" level=info msg="ignoring event" container=73b7baa19915d562d3c78fced74c21ad47385fce38038919d957ea0a4986b5d7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 21 11:05:46 addons-500000 dockerd[1153]: time="2023-08-21T11:05:46.564399971Z" level=warning msg="cleaning up after shim disconnected" id=73b7baa19915d562d3c78fced74c21ad47385fce38038919d957ea0a4986b5d7 namespace=moby
	Aug 21 11:05:46 addons-500000 dockerd[1153]: time="2023-08-21T11:05:46.564413055Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 21 11:08:30 addons-500000 dockerd[1153]: time="2023-08-21T11:08:30.513348468Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 21 11:08:30 addons-500000 dockerd[1153]: time="2023-08-21T11:08:30.513415466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 21 11:08:30 addons-500000 dockerd[1153]: time="2023-08-21T11:08:30.513434382Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 21 11:08:30 addons-500000 dockerd[1153]: time="2023-08-21T11:08:30.513446465Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 21 11:08:30 addons-500000 dockerd[1153]: time="2023-08-21T11:08:30.559905725Z" level=info msg="shim disconnected" id=ea525fc6fe39a64088142d53bb348ea2b2cff18079cf13792523934b29071bb6 namespace=moby
	Aug 21 11:08:30 addons-500000 dockerd[1153]: time="2023-08-21T11:08:30.559937599Z" level=warning msg="cleaning up after shim disconnected" id=ea525fc6fe39a64088142d53bb348ea2b2cff18079cf13792523934b29071bb6 namespace=moby
	Aug 21 11:08:30 addons-500000 dockerd[1153]: time="2023-08-21T11:08:30.559942391Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 21 11:08:30 addons-500000 dockerd[1148]: time="2023-08-21T11:08:30.560047804Z" level=info msg="ignoring event" container=ea525fc6fe39a64088142d53bb348ea2b2cff18079cf13792523934b29071bb6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 21 11:13:33 addons-500000 dockerd[1153]: time="2023-08-21T11:13:33.507206930Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 21 11:13:33 addons-500000 dockerd[1153]: time="2023-08-21T11:13:33.507591632Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 21 11:13:33 addons-500000 dockerd[1153]: time="2023-08-21T11:13:33.507665214Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 21 11:13:33 addons-500000 dockerd[1153]: time="2023-08-21T11:13:33.507710505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 21 11:13:33 addons-500000 dockerd[1148]: time="2023-08-21T11:13:33.569656053Z" level=info msg="ignoring event" container=a5fb7a4768a72363c77080eac81b45169632cbb8318e6502276df20ef9f6df80 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 21 11:13:33 addons-500000 dockerd[1153]: time="2023-08-21T11:13:33.569736426Z" level=info msg="shim disconnected" id=a5fb7a4768a72363c77080eac81b45169632cbb8318e6502276df20ef9f6df80 namespace=moby
	Aug 21 11:13:33 addons-500000 dockerd[1153]: time="2023-08-21T11:13:33.569761926Z" level=warning msg="cleaning up after shim disconnected" id=a5fb7a4768a72363c77080eac81b45169632cbb8318e6502276df20ef9f6df80 namespace=moby
	Aug 21 11:13:33 addons-500000 dockerd[1153]: time="2023-08-21T11:13:33.569765884Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                     CREATED             STATE               NAME                         ATTEMPT             POD ID
	a5fb7a4768a72       13753a81eccfd                                                                                                             15 seconds ago      Exited              hello-world-app              7                   a244270f71415
	77e5446fdd2e0       ghcr.io/headlamp-k8s/headlamp@sha256:498ea22dc5acadaa4015e7a50335d21fdce45d9e8f1f8adf29c2777da4182f98                     9 minutes ago       Running             headlamp                     0                   a2fdb8bd4cd8b
	12742b2537ff1       nginx@sha256:cac882be2b7305e0c8d3e3cd0575a2fd58f5fde6dd5d6299605aa0f3e67ca385                                             11 minutes ago      Running             nginx                        0                   ca7496b30bdd4
	dbe5746b118a6       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf              39 minutes ago      Running             gcp-auth                     0                   31154fc41fc35
	7979593c9bb52       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280   39 minutes ago      Running             volume-snapshot-controller   0                   70a68685a69fb
	fe9609fabef21       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280   39 minutes ago      Running             volume-snapshot-controller   0                   39eda7944d576
	16cfb4c805080       97e04611ad434                                                                                                             39 minutes ago      Running             coredns                      0                   b6fa8f87ea743
	36558206e7ebf       532e5a30e948f                                                                                                             39 minutes ago      Running             kube-proxy                   0                   ccc8633d52ca6
	bd48baf71b163       6eb63895cb67f                                                                                                             39 minutes ago      Running             kube-scheduler               0                   65c9ea48d27ae
	27dc2c0d7a4a5       24bc64e911039                                                                                                             39 minutes ago      Running             etcd                         0                   0f2cdc52bbda6
	dc949a6ce14c1       64aece92d6bde                                                                                                             39 minutes ago      Running             kube-apiserver               0                   090daa0e10080
	41982c5e9fc8f       389f6f052cf83                                                                                                             39 minutes ago      Running             kube-controller-manager      0                   a9c3d15b86bf8
	
	* 
	* ==> coredns [16cfb4c80508] <==
	* [INFO] 10.244.0.11:55380 - 15444 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000192417s
	[INFO] 10.244.0.11:55595 - 33986 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000080917s
	[INFO] 10.244.0.11:55380 - 36243 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000177876s
	[INFO] 10.244.0.11:55380 - 42834 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000146333s
	[INFO] 10.244.0.11:55595 - 5784 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00011875s
	[INFO] 10.244.0.11:55595 - 56910 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000050292s
	[INFO] 10.244.0.11:55380 - 35306 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000218333s
	[INFO] 10.244.0.11:55595 - 64077 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000055958s
	[INFO] 10.244.0.11:55595 - 56884 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000076625s
	[INFO] 10.244.0.11:55595 - 56007 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000070583s
	[INFO] 10.244.0.11:55595 - 54545 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000067333s
	[INFO] 10.244.0.11:51497 - 59355 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000398834s
	[INFO] 10.244.0.11:51497 - 38991 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000209708s
	[INFO] 10.244.0.11:51497 - 6555 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000191958s
	[INFO] 10.244.0.11:51497 - 63288 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000409876s
	[INFO] 10.244.0.11:51497 - 49529 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00012975s
	[INFO] 10.244.0.11:51497 - 3686 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000123626s
	[INFO] 10.244.0.11:51497 - 19423 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000240209s
	[INFO] 10.244.0.11:59481 - 42442 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000222709s
	[INFO] 10.244.0.11:59481 - 36904 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.0001005s
	[INFO] 10.244.0.11:59481 - 14729 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000057417s
	[INFO] 10.244.0.11:59481 - 55234 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000074708s
	[INFO] 10.244.0.11:59481 - 58225 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000045917s
	[INFO] 10.244.0.11:59481 - 23418 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00004575s
	[INFO] 10.244.0.11:59481 - 13624 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000090417s
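	Note: the NXDOMAIN/NOERROR pattern above is the normal search-path expansion for in-cluster lookups — each query for hello-world-app is first tried against the pod's search domains (…ingress-nginx.svc.cluster.local, …svc.cluster.local, …cluster.local) before the fully qualified name answers NOERROR, so these NXDOMAINs are not themselves a failure. A sketch of reproducing it from inside the cluster, assuming an image with nslookup available:
	
	  kubectl run dns-test --rm -it --image=busybox --restart=Never -- nslookup hello-world-app.default.svc.cluster.local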
	
	* 
	* ==> describe nodes <==
	* Name:               addons-500000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-500000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43
	                    minikube.k8s.io/name=addons-500000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_21T03_34_19_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 21 Aug 2023 10:34:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-500000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 21 Aug 2023 11:13:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 21 Aug 2023 11:09:56 +0000   Mon, 21 Aug 2023 10:34:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 21 Aug 2023 11:09:56 +0000   Mon, 21 Aug 2023 10:34:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 21 Aug 2023 11:09:56 +0000   Mon, 21 Aug 2023 10:34:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 21 Aug 2023 11:09:56 +0000   Mon, 21 Aug 2023 10:34:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-500000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 0e4a1f71467c44c8a10eca186773afe2
	  System UUID:                0e4a1f71467c44c8a10eca186773afe2
	  Boot ID:                    6d5e7ffc-fb7d-41fe-b076-69fd8535d300
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.4
	  Kubelet Version:            v1.27.4
	  Kube-Proxy Version:         v1.27.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-65bdb79f98-l7sq4         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  gcp-auth                    gcp-auth-58478865f7-zcg47                0 (0%)        0 (0%)      0 (0%)           0 (0%)         39m
	  headlamp                    headlamp-5c78f74d8d-llcss                0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m14s
	  kube-system                 coredns-5d78c9869d-hbg44                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     39m
	  kube-system                 etcd-addons-500000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         39m
	  kube-system                 kube-apiserver-addons-500000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         39m
	  kube-system                 kube-controller-manager-addons-500000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         39m
	  kube-system                 kube-proxy-z2wj9                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         39m
	  kube-system                 kube-scheduler-addons-500000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         39m
	  kube-system                 snapshot-controller-75bbb956b9-4pgqh     0 (0%)        0 (0%)      0 (0%)           0 (0%)         39m
	  kube-system                 snapshot-controller-75bbb956b9-j9mkf     0 (0%)        0 (0%)      0 (0%)           0 (0%)         39m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 39m   kube-proxy       
	  Normal  Starting                 39m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  39m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  39m   kubelet          Node addons-500000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    39m   kubelet          Node addons-500000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     39m   kubelet          Node addons-500000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                39m   kubelet          Node addons-500000 status is now: NodeReady
	  Normal  RegisteredNode           39m   node-controller  Node addons-500000 event: Registered Node addons-500000 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.490829] systemd-fstab-generator[117]: Ignoring "noauto" for root device
	[  +0.044680] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000871] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[Aug21 10:34] systemd-fstab-generator[479]: Ignoring "noauto" for root device
	[  +0.063431] systemd-fstab-generator[490]: Ignoring "noauto" for root device
	[  +0.413293] systemd-fstab-generator[750]: Ignoring "noauto" for root device
	[  +0.194883] systemd-fstab-generator[786]: Ignoring "noauto" for root device
	[  +0.079334] systemd-fstab-generator[797]: Ignoring "noauto" for root device
	[  +0.075319] systemd-fstab-generator[810]: Ignoring "noauto" for root device
	[  +1.241580] systemd-fstab-generator[968]: Ignoring "noauto" for root device
	[  +0.080868] systemd-fstab-generator[979]: Ignoring "noauto" for root device
	[  +0.070572] systemd-fstab-generator[990]: Ignoring "noauto" for root device
	[  +0.067357] systemd-fstab-generator[1001]: Ignoring "noauto" for root device
	[  +0.069942] systemd-fstab-generator[1042]: Ignoring "noauto" for root device
	[  +2.503453] systemd-fstab-generator[1141]: Ignoring "noauto" for root device
	[  +2.381640] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.661766] systemd-fstab-generator[1457]: Ignoring "noauto" for root device
	[  +5.156537] systemd-fstab-generator[2350]: Ignoring "noauto" for root device
	[ +13.738428] kauditd_printk_skb: 41 callbacks suppressed
	[  +1.700338] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[  +4.800757] kauditd_printk_skb: 48 callbacks suppressed
	[ +14.143799] kauditd_printk_skb: 54 callbacks suppressed
	[Aug21 11:02] kauditd_printk_skb: 1 callbacks suppressed
	[Aug21 11:04] kauditd_printk_skb: 1 callbacks suppressed
	[  +5.307462] kauditd_printk_skb: 10 callbacks suppressed
	
	* 
	* ==> etcd [27dc2c0d7a4a] <==
	* {"level":"info","ts":"2023-08-21T10:34:15.992Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-08-21T10:34:16.003Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-08-21T10:34:15.992Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-21T10:34:16.003Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.105.2:2379"}
	{"level":"info","ts":"2023-08-21T10:34:15.992Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-21T10:34:16.003Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-21T10:34:16.003Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-21T10:44:16.025Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":841}
	{"level":"info","ts":"2023-08-21T10:44:16.028Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":841,"took":"2.672822ms","hash":3376273956}
	{"level":"info","ts":"2023-08-21T10:44:16.028Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3376273956,"revision":841,"compact-revision":-1}
	{"level":"info","ts":"2023-08-21T10:49:16.035Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1031}
	{"level":"info","ts":"2023-08-21T10:49:16.038Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1031,"took":"1.375633ms","hash":1895539758}
	{"level":"info","ts":"2023-08-21T10:49:16.038Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1895539758,"revision":1031,"compact-revision":841}
	{"level":"info","ts":"2023-08-21T10:54:16.045Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1222}
	{"level":"info","ts":"2023-08-21T10:54:16.047Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1222,"took":"1.459351ms","hash":3279763987}
	{"level":"info","ts":"2023-08-21T10:54:16.047Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3279763987,"revision":1222,"compact-revision":1031}
	{"level":"info","ts":"2023-08-21T10:59:16.058Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1413}
	{"level":"info","ts":"2023-08-21T10:59:16.061Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1413,"took":"1.488371ms","hash":1268235317}
	{"level":"info","ts":"2023-08-21T10:59:16.061Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1268235317,"revision":1413,"compact-revision":1222}
	{"level":"info","ts":"2023-08-21T11:04:16.067Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1603}
	{"level":"info","ts":"2023-08-21T11:04:16.069Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1603,"took":"1.243127ms","hash":1670643557}
	{"level":"info","ts":"2023-08-21T11:04:16.070Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1670643557,"revision":1603,"compact-revision":1413}
	{"level":"info","ts":"2023-08-21T11:09:16.076Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1875}
	{"level":"info","ts":"2023-08-21T11:09:16.078Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1875,"took":"1.565329ms","hash":2017034248}
	{"level":"info","ts":"2023-08-21T11:09:16.078Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2017034248,"revision":1875,"compact-revision":1603}
	
	* 
	* ==> gcp-auth [dbe5746b118a] <==
	* 2023/08/21 10:34:42 GCP Auth Webhook started!
	2023/08/21 11:02:26 Ready to marshal response ...
	2023/08/21 11:02:26 Ready to write response ...
	2023/08/21 11:02:37 Ready to marshal response ...
	2023/08/21 11:02:37 Ready to write response ...
	2023/08/21 11:04:34 Ready to marshal response ...
	2023/08/21 11:04:34 Ready to write response ...
	2023/08/21 11:04:34 Ready to marshal response ...
	2023/08/21 11:04:34 Ready to write response ...
	2023/08/21 11:04:34 Ready to marshal response ...
	2023/08/21 11:04:34 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  11:13:48 up 39 min,  0 users,  load average: 0.32, 0.48, 0.44
	Linux addons-500000 5.10.57 #1 SMP PREEMPT Fri Jul 14 22:49:12 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [dc949a6ce14c] <==
	* I0821 10:54:16.765428       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0821 10:59:16.750519       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0821 10:59:16.751153       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0821 10:59:16.751904       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0821 10:59:16.752113       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0821 10:59:16.761892       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0821 10:59:16.761965       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0821 11:02:26.738684       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0821 11:02:26.869600       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs=map[IPv4:10.111.106.162]
	I0821 11:02:37.171860       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs=map[IPv4:10.102.172.159]
	I0821 11:04:16.751175       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0821 11:04:16.751671       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0821 11:04:16.751839       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0821 11:04:16.751936       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0821 11:04:16.752119       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0821 11:04:16.752232       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0821 11:04:34.815110       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs=map[IPv4:10.104.124.111]
	E0821 11:04:35.469619       1 authentication.go:70] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	E0821 11:04:35.737559       1 authentication.go:70] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	I0821 11:09:16.751697       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0821 11:09:16.751867       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0821 11:09:16.751997       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0821 11:09:16.752078       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0821 11:09:16.752395       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0821 11:09:16.752496       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
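	Note: the two "Unable to authenticate the request" errors at 11:04:35 indicate a bearer token referencing a serviceaccount the apiserver can no longer find. As a sketch, assuming the ingress addon's usual ingress-nginx namespace, whether the account still exists can be confirmed with:
	
	  kubectl -n ingress-nginx get serviceaccounts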
	
	* 
	* ==> kube-controller-manager [41982c5e9fc8] <==
	* I0821 11:10:46.786707       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0821 11:11:01.787473       1 pv_controller.go:1571] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0821 11:11:01.788084       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0821 11:11:16.787917       1 pv_controller.go:1571] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0821 11:11:16.787972       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0821 11:11:31.788901       1 pv_controller.go:1571] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0821 11:11:31.789250       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0821 11:11:46.789342       1 pv_controller.go:1571] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0821 11:11:46.789485       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0821 11:12:01.790311       1 pv_controller.go:1571] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0821 11:12:01.790585       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0821 11:12:16.791799       1 pv_controller.go:1571] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0821 11:12:16.791931       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0821 11:12:31.792211       1 pv_controller.go:1571] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0821 11:12:31.792296       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0821 11:12:46.793988       1 pv_controller.go:1571] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0821 11:12:46.794060       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0821 11:13:01.794093       1 pv_controller.go:1571] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0821 11:13:01.794189       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0821 11:13:16.794227       1 pv_controller.go:1571] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0821 11:13:16.794321       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0821 11:13:31.795399       1 pv_controller.go:1571] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0821 11:13:31.795571       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0821 11:13:46.795920       1 pv_controller.go:1571] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0821 11:13:46.796006       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
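	Note: the ProvisioningFailed loop above repeats every 15s because the hpvc claim references a StorageClass that was never created — consistent with the csi-hostpath-driver addon timing out earlier in the run. A sketch of confirming the missing class and the stuck claim:
	
	  kubectl get storageclass
	  kubectl -n default describe pvc hpvc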
	
	* 
	* ==> kube-proxy [36558206e7eb] <==
	* I0821 10:34:32.961845       1 node.go:141] Successfully retrieved node IP: 192.168.105.2
	I0821 10:34:32.961903       1 server_others.go:110] "Detected node IP" address="192.168.105.2"
	I0821 10:34:32.961922       1 server_others.go:554] "Using iptables proxy"
	I0821 10:34:32.984111       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0821 10:34:32.984124       1 server_others.go:192] "Using iptables Proxier"
	I0821 10:34:32.984147       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0821 10:34:32.984347       1 server.go:658] "Version info" version="v1.27.4"
	I0821 10:34:32.984357       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0821 10:34:32.984958       1 config.go:315] "Starting node config controller"
	I0821 10:34:32.984965       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0821 10:34:32.985291       1 config.go:188] "Starting service config controller"
	I0821 10:34:32.985295       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0821 10:34:32.985301       1 config.go:97] "Starting endpoint slice config controller"
	I0821 10:34:32.985318       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0821 10:34:33.085576       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0821 10:34:33.085604       1 shared_informer.go:318] Caches are synced for node config
	I0821 10:34:33.085608       1 shared_informer.go:318] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [bd48baf71b16] <==
	* W0821 10:34:16.768490       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0821 10:34:16.768493       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0821 10:34:16.768508       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0821 10:34:16.768511       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0821 10:34:16.768562       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0821 10:34:16.768566       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0821 10:34:17.606010       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0821 10:34:17.606029       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0821 10:34:17.645166       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0821 10:34:17.645193       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0821 10:34:17.674598       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0821 10:34:17.674623       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0821 10:34:17.707767       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0821 10:34:17.707781       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0821 10:34:17.724040       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0821 10:34:17.724057       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0821 10:34:17.728085       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0821 10:34:17.728146       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0821 10:34:17.756871       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0821 10:34:17.756889       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0821 10:34:17.785527       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0821 10:34:17.785576       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0821 10:34:17.785527       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0821 10:34:17.785647       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0821 10:34:20.949364       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-08-21 10:34:00 UTC, ends at Mon 2023-08-21 11:13:49 UTC. --
	Aug 21 11:12:16 addons-500000 kubelet[2369]: E0821 11:12:16.453692    2369 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=hello-world-app pod=hello-world-app-65bdb79f98-l7sq4_default(03900f9a-54f5-4d53-8e78-2fb31aa983b5)\"" pod="default/hello-world-app-65bdb79f98-l7sq4" podUID=03900f9a-54f5-4d53-8e78-2fb31aa983b5
	Aug 21 11:12:19 addons-500000 kubelet[2369]: E0821 11:12:19.562591    2369 iptables.go:575] "Could not set up iptables canary" err=<
	Aug 21 11:12:19 addons-500000 kubelet[2369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 21 11:12:19 addons-500000 kubelet[2369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 21 11:12:19 addons-500000 kubelet[2369]:  > table=nat chain=KUBE-KUBELET-CANARY
	Aug 21 11:12:28 addons-500000 kubelet[2369]: I0821 11:12:28.452160    2369 scope.go:115] "RemoveContainer" containerID="ea525fc6fe39a64088142d53bb348ea2b2cff18079cf13792523934b29071bb6"
	Aug 21 11:12:28 addons-500000 kubelet[2369]: E0821 11:12:28.452454    2369 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=hello-world-app pod=hello-world-app-65bdb79f98-l7sq4_default(03900f9a-54f5-4d53-8e78-2fb31aa983b5)\"" pod="default/hello-world-app-65bdb79f98-l7sq4" podUID=03900f9a-54f5-4d53-8e78-2fb31aa983b5
	Aug 21 11:12:41 addons-500000 kubelet[2369]: I0821 11:12:41.457288    2369 scope.go:115] "RemoveContainer" containerID="ea525fc6fe39a64088142d53bb348ea2b2cff18079cf13792523934b29071bb6"
	Aug 21 11:12:41 addons-500000 kubelet[2369]: E0821 11:12:41.457982    2369 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=hello-world-app pod=hello-world-app-65bdb79f98-l7sq4_default(03900f9a-54f5-4d53-8e78-2fb31aa983b5)\"" pod="default/hello-world-app-65bdb79f98-l7sq4" podUID=03900f9a-54f5-4d53-8e78-2fb31aa983b5
	Aug 21 11:12:54 addons-500000 kubelet[2369]: I0821 11:12:54.453455    2369 scope.go:115] "RemoveContainer" containerID="ea525fc6fe39a64088142d53bb348ea2b2cff18079cf13792523934b29071bb6"
	Aug 21 11:12:54 addons-500000 kubelet[2369]: E0821 11:12:54.454198    2369 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=hello-world-app pod=hello-world-app-65bdb79f98-l7sq4_default(03900f9a-54f5-4d53-8e78-2fb31aa983b5)\"" pod="default/hello-world-app-65bdb79f98-l7sq4" podUID=03900f9a-54f5-4d53-8e78-2fb31aa983b5
	Aug 21 11:13:05 addons-500000 kubelet[2369]: I0821 11:13:05.460873    2369 scope.go:115] "RemoveContainer" containerID="ea525fc6fe39a64088142d53bb348ea2b2cff18079cf13792523934b29071bb6"
	Aug 21 11:13:05 addons-500000 kubelet[2369]: E0821 11:13:05.462656    2369 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=hello-world-app pod=hello-world-app-65bdb79f98-l7sq4_default(03900f9a-54f5-4d53-8e78-2fb31aa983b5)\"" pod="default/hello-world-app-65bdb79f98-l7sq4" podUID=03900f9a-54f5-4d53-8e78-2fb31aa983b5
	Aug 21 11:13:19 addons-500000 kubelet[2369]: E0821 11:13:19.562805    2369 iptables.go:575] "Could not set up iptables canary" err=<
	Aug 21 11:13:19 addons-500000 kubelet[2369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 21 11:13:19 addons-500000 kubelet[2369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 21 11:13:19 addons-500000 kubelet[2369]:  > table=nat chain=KUBE-KUBELET-CANARY
	Aug 21 11:13:20 addons-500000 kubelet[2369]: I0821 11:13:20.453446    2369 scope.go:115] "RemoveContainer" containerID="ea525fc6fe39a64088142d53bb348ea2b2cff18079cf13792523934b29071bb6"
	Aug 21 11:13:20 addons-500000 kubelet[2369]: E0821 11:13:20.454151    2369 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=hello-world-app pod=hello-world-app-65bdb79f98-l7sq4_default(03900f9a-54f5-4d53-8e78-2fb31aa983b5)\"" pod="default/hello-world-app-65bdb79f98-l7sq4" podUID=03900f9a-54f5-4d53-8e78-2fb31aa983b5
	Aug 21 11:13:33 addons-500000 kubelet[2369]: I0821 11:13:33.454658    2369 scope.go:115] "RemoveContainer" containerID="ea525fc6fe39a64088142d53bb348ea2b2cff18079cf13792523934b29071bb6"
	Aug 21 11:13:33 addons-500000 kubelet[2369]: I0821 11:13:33.941200    2369 scope.go:115] "RemoveContainer" containerID="ea525fc6fe39a64088142d53bb348ea2b2cff18079cf13792523934b29071bb6"
	Aug 21 11:13:33 addons-500000 kubelet[2369]: I0821 11:13:33.941476    2369 scope.go:115] "RemoveContainer" containerID="a5fb7a4768a72363c77080eac81b45169632cbb8318e6502276df20ef9f6df80"
	Aug 21 11:13:33 addons-500000 kubelet[2369]: E0821 11:13:33.941660    2369 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=hello-world-app pod=hello-world-app-65bdb79f98-l7sq4_default(03900f9a-54f5-4d53-8e78-2fb31aa983b5)\"" pod="default/hello-world-app-65bdb79f98-l7sq4" podUID=03900f9a-54f5-4d53-8e78-2fb31aa983b5
	Aug 21 11:13:46 addons-500000 kubelet[2369]: I0821 11:13:46.453003    2369 scope.go:115] "RemoveContainer" containerID="a5fb7a4768a72363c77080eac81b45169632cbb8318e6502276df20ef9f6df80"
	Aug 21 11:13:46 addons-500000 kubelet[2369]: E0821 11:13:46.455283    2369 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=hello-world-app pod=hello-world-app-65bdb79f98-l7sq4_default(03900f9a-54f5-4d53-8e78-2fb31aa983b5)\"" pod="default/hello-world-app-65bdb79f98-l7sq4" podUID=03900f9a-54f5-4d53-8e78-2fb31aa983b5
	

-- /stdout --
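Two distinct symptoms repeat through the kubelet journal above: the ip6tables canary failure (the guest kernel apparently lacks the ip6tables nat table, so the IPv6 canary chain cannot be created) and the hello-world-app container stuck in CrashLoopBackOff with a 5m0s back-off. A quick manual triage for the crash-looping pod, assuming the cluster were still reachable after the run, would be:

	kubectl --context addons-500000 -n default describe pod hello-world-app-65bdb79f98-l7sq4
	kubectl --context addons-500000 -n default logs --previous hello-world-app-65bdb79f98-l7sq4

describe surfaces the restart count and last container state; logs --previous prints the output of the last terminated container instance, which is usually where the actual crash reason lives.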
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-500000 -n addons-500000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-500000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/CSI FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/CSI (545.96s)

TestAddons/parallel/CloudSpanner (832.89s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:831: failed waiting for cloud-spanner-emulator deployment to stabilize: timed out waiting for the condition
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
addons_test.go:833: ***** TestAddons/parallel/CloudSpanner: pod "app=cloud-spanner-emulator" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:833: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-500000 -n addons-500000
addons_test.go:833: TestAddons/parallel/CloudSpanner: showing logs for failed pods as of 2023-08-21 03:52:32.625892 -0700 PDT m=+1157.652857418
addons_test.go:834: failed waiting for app=cloud-spanner-emulator pod: app=cloud-spanner-emulator within 6m0s: context deadline exceeded
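For reference, the stabilization check the harness performs here can be approximated by hand with standard kubectl probes (a sketch using the deployment and label names from the lines above):

	kubectl --context addons-500000 -n default rollout status deployment/cloud-spanner-emulator --timeout=6m0s
	kubectl --context addons-500000 -n default get pods -l app=cloud-spanner-emulator

Both commands hitting the same 6m0s deadline would match the context-deadline-exceeded failures reported above.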
addons_test.go:836: (dbg) Run:  out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-500000
addons_test.go:836: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-500000: exit status 10 (1m52.004904958s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE: disable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl delete --force --ignore-not-found -f /etc/kubernetes/addons/deployment.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: the path "/etc/kubernetes/addons/deployment.yaml" does not exist
	]
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:837: failed to disable cloud-spanner addon: args "out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-500000" : exit status 10
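Note the failure mode in the stderr block above: the disable callback runs kubectl delete -f against /etc/kubernetes/addons/deployment.yaml, and --ignore-not-found only suppresses resources missing from the API server, not a manifest file missing from disk, so a deployment.yaml that was apparently never written still aborts the command with exit status 1. A defensive variant of that delete (a sketch of the shape only, not minikube's actual fix) would skip it when the manifest is absent:

	test -f /etc/kubernetes/addons/deployment.yaml && \
	  sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.27.4/kubectl delete --force --ignore-not-found \
	  -f /etc/kubernetes/addons/deployment.yaml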
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-500000 -n addons-500000
helpers_test.go:244: <<< TestAddons/parallel/CloudSpanner FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/CloudSpanner]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-500000 logs -n 25
helpers_test.go:252: TestAddons/parallel/CloudSpanner logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-670000 | jenkins | v1.31.2 | 21 Aug 23 03:33 PDT |                     |
	|         | -p download-only-670000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	| start   | -o=json --download-only           | download-only-670000 | jenkins | v1.31.2 | 21 Aug 23 03:33 PDT |                     |
	|         | -p download-only-670000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.4      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	| start   | -o=json --download-only           | download-only-670000 | jenkins | v1.31.2 | 21 Aug 23 03:33 PDT |                     |
	|         | -p download-only-670000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.0-rc.1 |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.31.2 | 21 Aug 23 03:33 PDT | 21 Aug 23 03:33 PDT |
	| delete  | -p download-only-670000           | download-only-670000 | jenkins | v1.31.2 | 21 Aug 23 03:33 PDT | 21 Aug 23 03:33 PDT |
	| delete  | -p download-only-670000           | download-only-670000 | jenkins | v1.31.2 | 21 Aug 23 03:33 PDT | 21 Aug 23 03:33 PDT |
	| start   | --download-only -p                | binary-mirror-462000 | jenkins | v1.31.2 | 21 Aug 23 03:33 PDT |                     |
	|         | binary-mirror-462000              |                      |         |         |                     |                     |
	|         | --alsologtostderr                 |                      |         |         |                     |                     |
	|         | --binary-mirror                   |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49329            |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-462000           | binary-mirror-462000 | jenkins | v1.31.2 | 21 Aug 23 03:33 PDT | 21 Aug 23 03:33 PDT |
	| start   | -p addons-500000                  | addons-500000        | jenkins | v1.31.2 | 21 Aug 23 03:33 PDT | 21 Aug 23 03:40 PDT |
	|         | --wait=true --memory=4000         |                      |         |         |                     |                     |
	|         | --alsologtostderr                 |                      |         |         |                     |                     |
	|         | --addons=registry                 |                      |         |         |                     |                     |
	|         | --addons=metrics-server           |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots          |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver      |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                 |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner            |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget         |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	|         | --addons=ingress                  |                      |         |         |                     |                     |
	|         | --addons=ingress-dns              |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p          | addons-500000        | jenkins | v1.31.2 | 21 Aug 23 03:52 PDT |                     |
	|         | addons-500000                     |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/21 03:33:48
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.7 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0821 03:33:48.415064    1442 out.go:296] Setting OutFile to fd 1 ...
	I0821 03:33:48.415176    1442 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 03:33:48.415179    1442 out.go:309] Setting ErrFile to fd 2...
	I0821 03:33:48.415182    1442 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 03:33:48.415284    1442 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17102-920/.minikube/bin
	I0821 03:33:48.416485    1442 out.go:303] Setting JSON to false
	I0821 03:33:48.431675    1442 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":202,"bootTime":1692613826,"procs":392,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0821 03:33:48.431757    1442 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0821 03:33:48.436776    1442 out.go:177] * [addons-500000] minikube v1.31.2 on Darwin 13.5 (arm64)
	I0821 03:33:48.443786    1442 out.go:177]   - MINIKUBE_LOCATION=17102
	I0821 03:33:48.443817    1442 notify.go:220] Checking for updates...
	I0821 03:33:48.452754    1442 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	I0821 03:33:48.459793    1442 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0821 03:33:48.466761    1442 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0821 03:33:48.469754    1442 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	I0821 03:33:48.472801    1442 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0821 03:33:48.476845    1442 driver.go:373] Setting default libvirt URI to qemu:///system
	I0821 03:33:48.479685    1442 out.go:177] * Using the qemu2 driver based on user configuration
	I0821 03:33:48.486794    1442 start.go:298] selected driver: qemu2
	I0821 03:33:48.486801    1442 start.go:902] validating driver "qemu2" against <nil>
	I0821 03:33:48.486809    1442 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0821 03:33:48.488928    1442 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0821 03:33:48.491687    1442 out.go:177] * Automatically selected the socket_vmnet network
	I0821 03:33:48.495787    1442 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0821 03:33:48.495806    1442 cni.go:84] Creating CNI manager for ""
	I0821 03:33:48.495814    1442 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0821 03:33:48.495818    1442 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0821 03:33:48.495823    1442 start_flags.go:319] config:
	{Name:addons-500000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:addons-500000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 03:33:48.500226    1442 iso.go:125] acquiring lock: {Name:mk813ea611542195bb0511881888be3fabc72ff6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 03:33:48.506762    1442 out.go:177] * Starting control plane node addons-500000 in cluster addons-500000
	I0821 03:33:48.510761    1442 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime docker
	I0821 03:33:48.510781    1442 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4
	I0821 03:33:48.510799    1442 cache.go:57] Caching tarball of preloaded images
	I0821 03:33:48.510861    1442 preload.go:174] Found /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0821 03:33:48.510867    1442 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on docker
	I0821 03:33:48.511057    1442 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/config.json ...
	I0821 03:33:48.511069    1442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/config.json: {Name:mke6ea6a330608889e821054234e4dab41e05376 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:33:48.511283    1442 start.go:365] acquiring machines lock for addons-500000: {Name:mk9b32d9fe994be32d77812db464b2cfa7bfb400 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0821 03:33:48.511397    1442 start.go:369] acquired machines lock for "addons-500000" in 109.25µs
	I0821 03:33:48.511409    1442 start.go:93] Provisioning new machine with config: &{Name:addons-500000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:addons-500000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0821 03:33:48.511444    1442 start.go:125] createHost starting for "" (driver="qemu2")
	I0821 03:33:48.515777    1442 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0821 03:33:48.825711    1442 start.go:159] libmachine.API.Create for "addons-500000" (driver="qemu2")
	I0821 03:33:48.825759    1442 client.go:168] LocalClient.Create starting
	I0821 03:33:48.825907    1442 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem
	I0821 03:33:48.926786    1442 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/cert.pem
	I0821 03:33:49.005435    1442 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17102-920/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17102-920/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0821 03:33:49.429478    1442 main.go:141] libmachine: Creating SSH key...
	I0821 03:33:49.603069    1442 main.go:141] libmachine: Creating Disk image...
	I0821 03:33:49.603078    1442 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0821 03:33:49.603290    1442 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/disk.qcow2.raw /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/disk.qcow2
	I0821 03:33:49.637224    1442 main.go:141] libmachine: STDOUT: 
	I0821 03:33:49.637249    1442 main.go:141] libmachine: STDERR: 
	I0821 03:33:49.637377    1442 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/disk.qcow2 +20000M
	I0821 03:33:49.644766    1442 main.go:141] libmachine: STDOUT: Image resized.
	
	I0821 03:33:49.644778    1442 main.go:141] libmachine: STDERR: 
	I0821 03:33:49.644801    1442 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/disk.qcow2
	I0821 03:33:49.644808    1442 main.go:141] libmachine: Starting QEMU VM...
	I0821 03:33:49.644850    1442 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:15:38:20:81:6d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/disk.qcow2
	I0821 03:33:49.712858    1442 main.go:141] libmachine: STDOUT: 
	I0821 03:33:49.712896    1442 main.go:141] libmachine: STDERR: 
	I0821 03:33:49.712900    1442 main.go:141] libmachine: Attempt 0
	I0821 03:33:49.712923    1442 main.go:141] libmachine: Searching for 5e:15:38:20:81:6d in /var/db/dhcpd_leases ...
	I0821 03:33:51.714037    1442 main.go:141] libmachine: Attempt 1
	I0821 03:33:51.714122    1442 main.go:141] libmachine: Searching for 5e:15:38:20:81:6d in /var/db/dhcpd_leases ...
	I0821 03:33:53.715339    1442 main.go:141] libmachine: Attempt 2
	I0821 03:33:53.715370    1442 main.go:141] libmachine: Searching for 5e:15:38:20:81:6d in /var/db/dhcpd_leases ...
	I0821 03:33:55.716394    1442 main.go:141] libmachine: Attempt 3
	I0821 03:33:55.716406    1442 main.go:141] libmachine: Searching for 5e:15:38:20:81:6d in /var/db/dhcpd_leases ...
	I0821 03:33:57.717443    1442 main.go:141] libmachine: Attempt 4
	I0821 03:33:57.717472    1442 main.go:141] libmachine: Searching for 5e:15:38:20:81:6d in /var/db/dhcpd_leases ...
	I0821 03:33:59.718558    1442 main.go:141] libmachine: Attempt 5
	I0821 03:33:59.718579    1442 main.go:141] libmachine: Searching for 5e:15:38:20:81:6d in /var/db/dhcpd_leases ...
	I0821 03:34:01.719634    1442 main.go:141] libmachine: Attempt 6
	I0821 03:34:01.719657    1442 main.go:141] libmachine: Searching for 5e:15:38:20:81:6d in /var/db/dhcpd_leases ...
	I0821 03:34:01.719810    1442 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0821 03:34:01.719849    1442 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:5e:15:38:20:81:6d ID:1,5e:15:38:20:81:6d Lease:0x64e48f18}
	I0821 03:34:01.719855    1442 main.go:141] libmachine: Found match: 5e:15:38:20:81:6d
	I0821 03:34:01.719867    1442 main.go:141] libmachine: IP: 192.168.105.2
	I0821 03:34:01.719873    1442 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
	I0821 03:34:03.738025    1442 machine.go:88] provisioning docker machine ...
	I0821 03:34:03.738086    1442 buildroot.go:166] provisioning hostname "addons-500000"
	I0821 03:34:03.739549    1442 main.go:141] libmachine: Using SSH client type: native
	I0821 03:34:03.740347    1442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102aae1e0] 0x102ab0c40 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0821 03:34:03.740367    1442 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-500000 && echo "addons-500000" | sudo tee /etc/hostname
	I0821 03:34:03.826570    1442 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-500000
	
	I0821 03:34:03.826696    1442 main.go:141] libmachine: Using SSH client type: native
	I0821 03:34:03.827174    1442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102aae1e0] 0x102ab0c40 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0821 03:34:03.827189    1442 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-500000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-500000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-500000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0821 03:34:03.891757    1442 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0821 03:34:03.891772    1442 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17102-920/.minikube CaCertPath:/Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17102-920/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17102-920/.minikube}
	I0821 03:34:03.891782    1442 buildroot.go:174] setting up certificates
	I0821 03:34:03.891796    1442 provision.go:83] configureAuth start
	I0821 03:34:03.891801    1442 provision.go:138] copyHostCerts
	I0821 03:34:03.891982    1442 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17102-920/.minikube/ca.pem (1078 bytes)
	I0821 03:34:03.892356    1442 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17102-920/.minikube/cert.pem (1123 bytes)
	I0821 03:34:03.892494    1442 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17102-920/.minikube/key.pem (1679 bytes)
	I0821 03:34:03.892606    1442 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17102-920/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca-key.pem org=jenkins.addons-500000 san=[192.168.105.2 192.168.105.2 localhost 127.0.0.1 minikube addons-500000]
	I0821 03:34:04.055231    1442 provision.go:172] copyRemoteCerts
	I0821 03:34:04.055290    1442 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0821 03:34:04.055299    1442 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/id_rsa Username:docker}
	I0821 03:34:04.085022    1442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0821 03:34:04.091757    1442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0821 03:34:04.098302    1442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0821 03:34:04.105297    1442 provision.go:86] duration metric: configureAuth took 213.489792ms
	I0821 03:34:04.105304    1442 buildroot.go:189] setting minikube options for container-runtime
	I0821 03:34:04.105410    1442 config.go:182] Loaded profile config "addons-500000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0821 03:34:04.105443    1442 main.go:141] libmachine: Using SSH client type: native
	I0821 03:34:04.105658    1442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102aae1e0] 0x102ab0c40 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0821 03:34:04.105665    1442 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0821 03:34:04.160033    1442 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0821 03:34:04.160039    1442 buildroot.go:70] root file system type: tmpfs
	I0821 03:34:04.160095    1442 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0821 03:34:04.160145    1442 main.go:141] libmachine: Using SSH client type: native
	I0821 03:34:04.160376    1442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102aae1e0] 0x102ab0c40 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0821 03:34:04.160410    1442 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0821 03:34:04.217511    1442 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0821 03:34:04.217555    1442 main.go:141] libmachine: Using SSH client type: native
	I0821 03:34:04.217777    1442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102aae1e0] 0x102ab0c40 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0821 03:34:04.217788    1442 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0821 03:34:04.516566    1442 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0821 03:34:04.516576    1442 machine.go:91] provisioned docker machine in 778.543875ms
	I0821 03:34:04.516581    1442 client.go:171] LocalClient.Create took 15.691254833s
	I0821 03:34:04.516600    1442 start.go:167] duration metric: libmachine.API.Create for "addons-500000" took 15.691329875s
	I0821 03:34:04.516605    1442 start.go:300] post-start starting for "addons-500000" (driver="qemu2")
	I0821 03:34:04.516610    1442 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0821 03:34:04.516676    1442 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0821 03:34:04.516684    1442 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/id_rsa Username:docker}
	I0821 03:34:04.547645    1442 ssh_runner.go:195] Run: cat /etc/os-release
	I0821 03:34:04.548977    1442 info.go:137] Remote host: Buildroot 2021.02.12
	I0821 03:34:04.548988    1442 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17102-920/.minikube/addons for local assets ...
	I0821 03:34:04.549067    1442 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17102-920/.minikube/files for local assets ...
	I0821 03:34:04.549094    1442 start.go:303] post-start completed in 32.487208ms
	I0821 03:34:04.549503    1442 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/config.json ...
	I0821 03:34:04.549671    1442 start.go:128] duration metric: createHost completed in 16.038665083s
	I0821 03:34:04.549713    1442 main.go:141] libmachine: Using SSH client type: native
	I0821 03:34:04.549937    1442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102aae1e0] 0x102ab0c40 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0821 03:34:04.549942    1442 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0821 03:34:04.603319    1442 main.go:141] libmachine: SSH cmd err, output: <nil>: 1692614044.503149419
	
	I0821 03:34:04.603325    1442 fix.go:206] guest clock: 1692614044.503149419
	I0821 03:34:04.603329    1442 fix.go:219] Guest: 2023-08-21 03:34:04.503149419 -0700 PDT Remote: 2023-08-21 03:34:04.549674 -0700 PDT m=+16.153755168 (delta=-46.524581ms)
	I0821 03:34:04.603340    1442 fix.go:190] guest clock delta is within tolerance: -46.524581ms
	I0821 03:34:04.603349    1442 start.go:83] releasing machines lock for "addons-500000", held for 16.092394834s
	I0821 03:34:04.603625    1442 ssh_runner.go:195] Run: cat /version.json
	I0821 03:34:04.603635    1442 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/id_rsa Username:docker}
	I0821 03:34:04.603639    1442 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0821 03:34:04.603685    1442 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/id_rsa Username:docker}
	I0821 03:34:04.631400    1442 ssh_runner.go:195] Run: systemctl --version
	I0821 03:34:04.633303    1442 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0821 03:34:04.675003    1442 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0821 03:34:04.675044    1442 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0821 03:34:04.680093    1442 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0821 03:34:04.680102    1442 start.go:466] detecting cgroup driver to use...
	I0821 03:34:04.680217    1442 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0821 03:34:04.685575    1442 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0821 03:34:04.689003    1442 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0821 03:34:04.692463    1442 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0821 03:34:04.692496    1442 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0821 03:34:04.695492    1442 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0821 03:34:04.698438    1442 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0821 03:34:04.701779    1442 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0821 03:34:04.705308    1442 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0821 03:34:04.708997    1442 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0821 03:34:04.712485    1442 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0821 03:34:04.715157    1442 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0821 03:34:04.718062    1442 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0821 03:34:04.801182    1442 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0821 03:34:04.809752    1442 start.go:466] detecting cgroup driver to use...
	I0821 03:34:04.809829    1442 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0821 03:34:04.815491    1442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0821 03:34:04.820439    1442 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0821 03:34:04.826330    1442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0821 03:34:04.831197    1442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0821 03:34:04.835955    1442 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0821 03:34:04.893707    1442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0821 03:34:04.899704    1442 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0821 03:34:04.905738    1442 ssh_runner.go:195] Run: which cri-dockerd
	I0821 03:34:04.907314    1442 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0821 03:34:04.910018    1442 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0821 03:34:04.915159    1442 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0821 03:34:04.993497    1442 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0821 03:34:05.073322    1442 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0821 03:34:05.073337    1442 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0821 03:34:05.078736    1442 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0821 03:34:05.148942    1442 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0821 03:34:06.310888    1442 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.161962625s)
	I0821 03:34:06.310946    1442 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0821 03:34:06.389910    1442 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0821 03:34:06.470512    1442 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0821 03:34:06.540771    1442 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0821 03:34:06.608028    1442 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0821 03:34:06.614951    1442 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0821 03:34:06.680856    1442 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0821 03:34:06.705016    1442 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0821 03:34:06.705100    1442 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0821 03:34:06.707492    1442 start.go:534] Will wait 60s for crictl version
	I0821 03:34:06.707526    1442 ssh_runner.go:195] Run: which crictl
	I0821 03:34:06.708906    1442 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0821 03:34:06.723485    1442 start.go:550] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.4
	RuntimeApiVersion:  v1alpha2
	I0821 03:34:06.723553    1442 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0821 03:34:06.733136    1442 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0821 03:34:06.752243    1442 out.go:204] * Preparing Kubernetes v1.27.4 on Docker 24.0.4 ...
	I0821 03:34:06.752395    1442 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0821 03:34:06.753728    1442 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0821 03:34:06.757671    1442 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime docker
	I0821 03:34:06.757717    1442 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0821 03:34:06.767699    1442 docker.go:636] Got preloaded images: 
	I0821 03:34:06.767706    1442 docker.go:642] registry.k8s.io/kube-apiserver:v1.27.4 wasn't preloaded
	I0821 03:34:06.767758    1442 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0821 03:34:06.770623    1442 ssh_runner.go:195] Run: which lz4
	I0821 03:34:06.772016    1442 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0821 03:34:06.773407    1442 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0821 03:34:06.773426    1442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (343658271 bytes)
	I0821 03:34:08.065715    1442 docker.go:600] Took 1.293779 seconds to copy over tarball
	I0821 03:34:08.065776    1442 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0821 03:34:09.083194    1442 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.017432542s)
	I0821 03:34:09.083208    1442 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0821 03:34:09.098174    1442 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0821 03:34:09.101758    1442 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0821 03:34:09.107271    1442 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0821 03:34:09.185186    1442 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0821 03:34:11.583398    1442 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.398262792s)
	I0821 03:34:11.583497    1442 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0821 03:34:11.599112    1442 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.4
	registry.k8s.io/kube-controller-manager:v1.27.4
	registry.k8s.io/kube-scheduler:v1.27.4
	registry.k8s.io/kube-proxy:v1.27.4
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0821 03:34:11.599121    1442 cache_images.go:84] Images are preloaded, skipping loading
	I0821 03:34:11.599173    1442 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0821 03:34:11.606813    1442 cni.go:84] Creating CNI manager for ""
	I0821 03:34:11.606822    1442 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0821 03:34:11.606852    1442 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0821 03:34:11.606862    1442 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.27.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-500000 NodeName:addons-500000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0821 03:34:11.606930    1442 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-500000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
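Note: the evictionHard thresholds above are literal percent strings ("0%"). In raw minikube output they frequently surface mangled as "0%!"(MISSING)" because the already-rendered YAML is passed back through a Go printf-style formatter, which parses %" as a verb with no matching argument. A minimal Go sketch of that mechanism (illustrative only, not minikube's actual code):

package main

import "fmt"

func main() {
	// A rendered config value that contains a literal percent sign.
	rendered := `nodefs.available: "0%"`
	// Passing already-rendered text as a *format string* makes fmt parse
	// %" as a verb; with no matching argument it emits %!"(MISSING).
	mangled := fmt.Sprintf(rendered)
	fmt.Println(mangled) // prints: nodefs.available: "0%!"(MISSING)
}

The fix on the logging side is simply to print rendered text with a "%s" format (or fmt.Print) instead of using it as the format string itself.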
	I0821 03:34:11.606959    1442 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-500000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.4 ClusterName:addons-500000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
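The kubelet drop-in above (written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below) is rendered from a Go text/template populated with the node's flags. A simplified sketch of that rendering step; the type and field names here are illustrative, not minikube's:

package main

import (
	"os"
	"text/template"
)

// unitData carries the values substituted into the drop-in; the real
// template in minikube's bootstrapper has more fields.
type unitData struct {
	Binary string
	Flags  string
}

const dropIn = `[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart={{.Binary}} {{.Flags}}

[Install]
`

func main() {
	t := template.Must(template.New("10-kubeadm.conf").Parse(dropIn))
	if err := t.Execute(os.Stdout, unitData{
		Binary: "/var/lib/minikube/binaries/v1.27.4/kubelet",
		Flags:  "--config=/var/lib/kubelet/config.yaml --node-ip=192.168.105.2",
	}); err != nil {
		panic(err)
	}
}

The empty ExecStart= line is deliberate systemd syntax: it clears the ExecStart inherited from kubelet.service before setting the override.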
	I0821 03:34:11.607013    1442 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.4
	I0821 03:34:11.609958    1442 binaries.go:44] Found k8s binaries, skipping transfer
	I0821 03:34:11.609992    1442 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0821 03:34:11.613080    1442 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0821 03:34:11.618135    1442 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0821 03:34:11.623217    1442 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I0821 03:34:11.628067    1442 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0821 03:34:11.629338    1442 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0821 03:34:11.633264    1442 certs.go:56] Setting up /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000 for IP: 192.168.105.2
	I0821 03:34:11.633272    1442 certs.go:190] acquiring lock for shared ca certs: {Name:mkaf8bee91c9bef113528e728629bac5c142d5d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:34:11.633419    1442 certs.go:204] generating minikubeCA CA: /Users/jenkins/minikube-integration/17102-920/.minikube/ca.key
	I0821 03:34:11.709497    1442 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17102-920/.minikube/ca.crt ...
	I0821 03:34:11.709504    1442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/ca.crt: {Name:mk11304afc04d282dffa1bbfafecb7763b86f0d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:34:11.709741    1442 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17102-920/.minikube/ca.key ...
	I0821 03:34:11.709747    1442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/ca.key: {Name:mk7632addcfceaabe09bce428c8dd59051132a6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:34:11.709856    1442 certs.go:204] generating proxyClientCA CA: /Users/jenkins/minikube-integration/17102-920/.minikube/proxy-client-ca.key
	I0821 03:34:11.928292    1442 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17102-920/.minikube/proxy-client-ca.crt ...
	I0821 03:34:11.928298    1442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/proxy-client-ca.crt: {Name:mk59ba2d6f1e462ee2e456d21a76e6acaba82b70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:34:11.928531    1442 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17102-920/.minikube/proxy-client-ca.key ...
	I0821 03:34:11.928534    1442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/proxy-client-ca.key: {Name:mk02c96134c44ce7714696be07e0b5c22f58dc64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:34:11.928684    1442 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/client.key
	I0821 03:34:11.928691    1442 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/client.crt with IP's: []
	I0821 03:34:12.116170    1442 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/client.crt ...
	I0821 03:34:12.116177    1442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/client.crt: {Name:mk3182b685506ec2dbfcad41054e3ffc2bf0f3b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:34:12.116379    1442 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/client.key ...
	I0821 03:34:12.116384    1442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/client.key: {Name:mk087ee0a568a92e1e97ae6eb06dd6604454b2e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:34:12.116489    1442 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/apiserver.key.96055969
	I0821 03:34:12.116499    1442 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/apiserver.crt.96055969 with IP's: [192.168.105.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0821 03:34:12.174634    1442 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/apiserver.crt.96055969 ...
	I0821 03:34:12.174637    1442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/apiserver.crt.96055969: {Name:mk02f137a3a75334a28e6811666f6d1dde47709c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:34:12.174771    1442 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/apiserver.key.96055969 ...
	I0821 03:34:12.174774    1442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/apiserver.key.96055969: {Name:mk629f60ce1370d0aadb852a255428713cef631b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:34:12.174873    1442 certs.go:337] copying /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/apiserver.crt.96055969 -> /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/apiserver.crt
	I0821 03:34:12.175028    1442 certs.go:341] copying /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/apiserver.key.96055969 -> /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/apiserver.key
	I0821 03:34:12.175114    1442 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/proxy-client.key
	I0821 03:34:12.175123    1442 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/proxy-client.crt with IP's: []
	I0821 03:34:12.291172    1442 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/proxy-client.crt ...
	I0821 03:34:12.291175    1442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/proxy-client.crt: {Name:mk4861ba5de37ed8d82543663b167ed0e04664dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:34:12.291331    1442 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/proxy-client.key ...
	I0821 03:34:12.291334    1442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/proxy-client.key: {Name:mk5eb1fb206858f7f6262a3b86ec8673fdeb4399 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
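The certs.go/crypto.go sequence above builds the self-signed minikubeCA and proxyClientCA, then signs the client, apiserver, and aggregator certificates against them, all with Go's standard library. A condensed sketch of the CA half, assuming RSA keys as minikube uses (error handling shortened for brevity):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	// Self-signed CA template: the cert is both subject and issuer.
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	// PEM-encode to stdout; minikube writes this to .minikube/ca.crt.
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}

Leaf certificates (client.crt, apiserver.crt, proxy-client.crt) are produced the same way, except the CA template and CA key are passed as the issuer and the apiserver template carries the SAN IPs logged above (192.168.105.2, 10.96.0.1, 127.0.0.1, 10.0.0.1).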
	I0821 03:34:12.291586    1442 certs.go:437] found cert: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca-key.pem (1679 bytes)
	I0821 03:34:12.291611    1442 certs.go:437] found cert: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem (1078 bytes)
	I0821 03:34:12.291633    1442 certs.go:437] found cert: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/cert.pem (1123 bytes)
	I0821 03:34:12.291654    1442 certs.go:437] found cert: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/key.pem (1679 bytes)
	I0821 03:34:12.292029    1442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0821 03:34:12.300489    1442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0821 03:34:12.307765    1442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0821 03:34:12.314499    1442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0821 03:34:12.321449    1442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0821 03:34:12.328965    1442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0821 03:34:12.336085    1442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0821 03:34:12.342676    1442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0821 03:34:12.349529    1442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0821 03:34:12.356907    1442 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0821 03:34:12.363000    1442 ssh_runner.go:195] Run: openssl version
	I0821 03:34:12.364943    1442 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0821 03:34:12.368659    1442 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0821 03:34:12.370316    1442 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 21 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I0821 03:34:12.370337    1442 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0821 03:34:12.372170    1442 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0821 03:34:12.375051    1442 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0821 03:34:12.376254    1442 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0821 03:34:12.376292    1442 kubeadm.go:404] StartCluster: {Name:addons-500000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:addons-500000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 03:34:12.376353    1442 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0821 03:34:12.381765    1442 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0821 03:34:12.385127    1442 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0821 03:34:12.388050    1442 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0821 03:34:12.390699    1442 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0821 03:34:12.390714    1442 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0821 03:34:12.412358    1442 kubeadm.go:322] [init] Using Kubernetes version: v1.27.4
	I0821 03:34:12.412390    1442 kubeadm.go:322] [preflight] Running pre-flight checks
	I0821 03:34:12.465080    1442 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0821 03:34:12.465135    1442 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0821 03:34:12.465183    1442 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0821 03:34:12.530098    1442 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0821 03:34:12.539343    1442 out.go:204]   - Generating certificates and keys ...
	I0821 03:34:12.539375    1442 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0821 03:34:12.539413    1442 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0821 03:34:12.639909    1442 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0821 03:34:12.680054    1442 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0821 03:34:12.714095    1442 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0821 03:34:12.849965    1442 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0821 03:34:12.996137    1442 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0821 03:34:12.996199    1442 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-500000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0821 03:34:13.141022    1442 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0821 03:34:13.141102    1442 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-500000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0821 03:34:13.228117    1442 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0821 03:34:13.409230    1442 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0821 03:34:13.774136    1442 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0821 03:34:13.774180    1442 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0821 03:34:13.866700    1442 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0821 03:34:13.977782    1442 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0821 03:34:14.068222    1442 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0821 03:34:14.144551    1442 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0821 03:34:14.151809    1442 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0821 03:34:14.152307    1442 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0821 03:34:14.152438    1442 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0821 03:34:14.228545    1442 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0821 03:34:14.232527    1442 out.go:204]   - Booting up control plane ...
	I0821 03:34:14.232575    1442 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0821 03:34:14.232614    1442 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0821 03:34:14.232645    1442 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0821 03:34:14.236440    1442 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0821 03:34:14.238376    1442 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0821 03:34:18.241227    1442 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.002539 seconds
	I0821 03:34:18.241427    1442 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0821 03:34:18.252886    1442 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0821 03:34:18.774491    1442 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0821 03:34:18.774728    1442 kubeadm.go:322] [mark-control-plane] Marking the node addons-500000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0821 03:34:19.280325    1442 kubeadm.go:322] [bootstrap-token] Using token: jvxtql.8wgzhr7nb5g9o93n
	I0821 03:34:19.286479    1442 out.go:204]   - Configuring RBAC rules ...
	I0821 03:34:19.286537    1442 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0821 03:34:19.290363    1442 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0821 03:34:19.293121    1442 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0821 03:34:19.294256    1442 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0821 03:34:19.295736    1442 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0821 03:34:19.296773    1442 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0821 03:34:19.301173    1442 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0821 03:34:19.474355    1442 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0821 03:34:19.693544    1442 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0821 03:34:19.694011    1442 kubeadm.go:322] 
	I0821 03:34:19.694043    1442 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0821 03:34:19.694047    1442 kubeadm.go:322] 
	I0821 03:34:19.694084    1442 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0821 03:34:19.694086    1442 kubeadm.go:322] 
	I0821 03:34:19.694099    1442 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0821 03:34:19.694192    1442 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0821 03:34:19.694216    1442 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0821 03:34:19.694219    1442 kubeadm.go:322] 
	I0821 03:34:19.694251    1442 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0821 03:34:19.694263    1442 kubeadm.go:322] 
	I0821 03:34:19.694293    1442 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0821 03:34:19.694296    1442 kubeadm.go:322] 
	I0821 03:34:19.694320    1442 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0821 03:34:19.694360    1442 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0821 03:34:19.694390    1442 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0821 03:34:19.694394    1442 kubeadm.go:322] 
	I0821 03:34:19.694446    1442 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0821 03:34:19.694488    1442 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0821 03:34:19.694495    1442 kubeadm.go:322] 
	I0821 03:34:19.694535    1442 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token jvxtql.8wgzhr7nb5g9o93n \
	I0821 03:34:19.694617    1442 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:c361d9930575cb4141f86c9c696a425212668e350af0245a5e7de41b1bd48407 \
	I0821 03:34:19.694632    1442 kubeadm.go:322] 	--control-plane 
	I0821 03:34:19.694634    1442 kubeadm.go:322] 
	I0821 03:34:19.694684    1442 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0821 03:34:19.694688    1442 kubeadm.go:322] 
	I0821 03:34:19.694735    1442 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token jvxtql.8wgzhr7nb5g9o93n \
	I0821 03:34:19.694782    1442 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:c361d9930575cb4141f86c9c696a425212668e350af0245a5e7de41b1bd48407 
	I0821 03:34:19.694835    1442 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
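The --discovery-token-ca-cert-hash printed in the join commands above is kubeadm's public-key pin: the hex-encoded SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info. A short sketch that recomputes it from the ca.crt used on this node:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	// Path taken from this log; adjust for other clusters.
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// kubeadm pins the SHA-256 of the DER-encoded SubjectPublicKeyInfo.
	fmt.Printf("sha256:%x\n", sha256.Sum256(cert.RawSubjectPublicKeyInfo))
}

Run against this cluster's CA it should print the same sha256:c361d993... value shown in the join commands.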
	I0821 03:34:19.694840    1442 cni.go:84] Creating CNI manager for ""
	I0821 03:34:19.694847    1442 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0821 03:34:19.703814    1442 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0821 03:34:19.707890    1442 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0821 03:34:19.711023    1442 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0821 03:34:19.716873    1442 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0821 03:34:19.716924    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:19.716951    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43 minikube.k8s.io/name=addons-500000 minikube.k8s.io/updated_at=2023_08_21T03_34_19_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:19.723924    1442 ops.go:34] apiserver oom_adj: -16
	I0821 03:34:19.767999    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:19.814902    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:20.352169    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:20.852188    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:21.352164    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:21.852123    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:22.352346    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:22.852184    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:23.352159    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:23.852279    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:24.352116    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:24.852182    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:25.352203    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:25.852083    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:26.352293    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:26.852062    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:27.352046    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:27.851991    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:28.352173    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:28.851976    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:29.352173    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:29.851943    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:30.352016    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:30.851904    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:31.351923    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:31.851905    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:32.351835    1442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 03:34:32.388500    1442 kubeadm.go:1081] duration metric: took 12.671972458s to wait for elevateKubeSystemPrivileges.
	I0821 03:34:32.388516    1442 kubeadm.go:406] StartCluster complete in 20.01278175s
	I0821 03:34:32.388525    1442 settings.go:142] acquiring lock: {Name:mkeb461ec3a6a92ee32ce41e8df63d6759cb2728 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:34:32.388680    1442 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17102-920/kubeconfig
	I0821 03:34:32.388902    1442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/kubeconfig: {Name:mk2bc9c64ad130c36a0253707ac2ba3f8fd22371 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:34:32.389107    1442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0821 03:34:32.389147    1442 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0821 03:34:32.389221    1442 addons.go:69] Setting volumesnapshots=true in profile "addons-500000"
	I0821 03:34:32.389227    1442 addons.go:231] Setting addon volumesnapshots=true in "addons-500000"
	I0821 03:34:32.389225    1442 addons.go:69] Setting cloud-spanner=true in profile "addons-500000"
	I0821 03:34:32.389236    1442 addons.go:231] Setting addon cloud-spanner=true in "addons-500000"
	I0821 03:34:32.389251    1442 config.go:182] Loaded profile config "addons-500000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0821 03:34:32.389271    1442 host.go:66] Checking if "addons-500000" exists ...
	I0821 03:34:32.389279    1442 addons.go:69] Setting storage-provisioner=true in profile "addons-500000"
	I0821 03:34:32.389222    1442 addons.go:69] Setting gcp-auth=true in profile "addons-500000"
	I0821 03:34:32.389282    1442 addons.go:231] Setting addon storage-provisioner=true in "addons-500000"
	I0821 03:34:32.389288    1442 mustload.go:65] Loading cluster: addons-500000
	I0821 03:34:32.389299    1442 host.go:66] Checking if "addons-500000" exists ...
	I0821 03:34:32.389299    1442 addons.go:69] Setting inspektor-gadget=true in profile "addons-500000"
	I0821 03:34:32.389327    1442 addons.go:69] Setting registry=true in profile "addons-500000"
	I0821 03:34:32.389360    1442 config.go:182] Loaded profile config "addons-500000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0821 03:34:32.389358    1442 addons.go:69] Setting ingress-dns=true in profile "addons-500000"
	I0821 03:34:32.389378    1442 addons.go:231] Setting addon ingress-dns=true in "addons-500000"
	I0821 03:34:32.389273    1442 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-500000"
	I0821 03:34:32.389396    1442 addons.go:69] Setting ingress=true in profile "addons-500000"
	I0821 03:34:32.389434    1442 host.go:66] Checking if "addons-500000" exists ...
	I0821 03:34:32.389418    1442 addons.go:69] Setting metrics-server=true in profile "addons-500000"
	I0821 03:34:32.389454    1442 addons.go:231] Setting addon metrics-server=true in "addons-500000"
	I0821 03:34:32.389465    1442 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-500000"
	I0821 03:34:32.389506    1442 host.go:66] Checking if "addons-500000" exists ...
	I0821 03:34:32.389519    1442 host.go:66] Checking if "addons-500000" exists ...
	I0821 03:34:32.389271    1442 host.go:66] Checking if "addons-500000" exists ...
	W0821 03:34:32.389564    1442 host.go:54] host status for "addons-500000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/monitor: connect: connection refused
	W0821 03:34:32.389572    1442 addons.go:277] "addons-500000" is not running, setting storage-provisioner=true and skipping enablement (err=<nil>)
	I0821 03:34:32.389347    1442 addons.go:231] Setting addon inspektor-gadget=true in "addons-500000"
	I0821 03:34:32.389693    1442 host.go:66] Checking if "addons-500000" exists ...
	W0821 03:34:32.389757    1442 host.go:54] host status for "addons-500000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/monitor: connect: connection refused
	W0821 03:34:32.389767    1442 addons.go:277] "addons-500000" is not running, setting cloud-spanner=true and skipping enablement (err=<nil>)
	I0821 03:34:32.389367    1442 addons.go:231] Setting addon registry=true in "addons-500000"
	I0821 03:34:32.389786    1442 host.go:66] Checking if "addons-500000" exists ...
	W0821 03:34:32.389790    1442 host.go:54] host status for "addons-500000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/monitor: connect: connection refused
	W0821 03:34:32.389796    1442 addons.go:277] "addons-500000" is not running, setting metrics-server=true and skipping enablement (err=<nil>)
	I0821 03:34:32.389799    1442 addons.go:467] Verifying addon metrics-server=true in "addons-500000"
	W0821 03:34:32.389788    1442 host.go:54] host status for "addons-500000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/monitor: connect: connection refused
	W0821 03:34:32.389803    1442 addons.go:277] "addons-500000" is not running, setting csi-hostpath-driver=true and skipping enablement (err=<nil>)
	I0821 03:34:32.389805    1442 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-500000"
	I0821 03:34:32.389275    1442 addons.go:69] Setting default-storageclass=true in profile "addons-500000"
	I0821 03:34:32.394058    1442 out.go:177] * Verifying csi-hostpath-driver addon...
	I0821 03:34:32.389436    1442 addons.go:231] Setting addon ingress=true in "addons-500000"
	I0821 03:34:32.389868    1442 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-500000"
	W0821 03:34:32.389953    1442 host.go:54] host status for "addons-500000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/monitor: connect: connection refused
	W0821 03:34:32.390033    1442 host.go:54] host status for "addons-500000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/monitor: connect: connection refused
	W0821 03:34:32.390053    1442 host.go:54] host status for "addons-500000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/monitor: connect: connection refused
	I0821 03:34:32.390510    1442 host.go:66] Checking if "addons-500000" exists ...
	I0821 03:34:32.409190    1442 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	W0821 03:34:32.404296    1442 addons.go:277] "addons-500000" is not running, setting ingress-dns=true and skipping enablement (err=<nil>)
	W0821 03:34:32.404342    1442 addons.go:277] "addons-500000" is not running, setting inspektor-gadget=true and skipping enablement (err=<nil>)
	W0821 03:34:32.404346    1442 addons.go:277] "addons-500000" is not running, setting registry=true and skipping enablement (err=<nil>)
	I0821 03:34:32.404410    1442 host.go:66] Checking if "addons-500000" exists ...
	I0821 03:34:32.404764    1442 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0821 03:34:32.413218    1442 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0821 03:34:32.413224    1442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0821 03:34:32.413232    1442 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/id_rsa Username:docker}
	I0821 03:34:32.413266    1442 addons.go:467] Verifying addon registry=true in "addons-500000"
	I0821 03:34:32.418274    1442 out.go:177] * Verifying registry addon...
	I0821 03:34:32.419795    1442 addons.go:231] Setting addon default-storageclass=true in "addons-500000"
	I0821 03:34:32.419868    1442 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-500000" context rescaled to 1 replicas
	I0821 03:34:32.420817    1442 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0821 03:34:32.421498    1442 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0821 03:34:32.421694    1442 host.go:66] Checking if "addons-500000" exists ...
	I0821 03:34:32.421701    1442 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0821 03:34:32.421849    1442 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0821 03:34:32.431173    1442 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0821 03:34:32.440212    1442 out.go:177] * Verifying Kubernetes components...
	I0821 03:34:32.431974    1442 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0821 03:34:32.435186    1442 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=registry
	I0821 03:34:32.444202    1442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0821 03:34:32.444209    1442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0821 03:34:32.447466    1442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0821 03:34:32.448196    1442 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.8.1
	I0821 03:34:32.448211    1442 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/id_rsa Username:docker}
	I0821 03:34:32.451292    1442 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0821 03:34:32.451299    1442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16083 bytes)
	I0821 03:34:32.451306    1442 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/id_rsa Username:docker}
	I0821 03:34:32.454351    1442 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0821 03:34:32.454358    1442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0821 03:34:32.485876    1442 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0821 03:34:32.485886    1442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0821 03:34:32.513135    1442 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0821 03:34:32.513147    1442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0821 03:34:32.532036    1442 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0821 03:34:32.532052    1442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0821 03:34:32.537566    1442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0821 03:34:32.542495    1442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0821 03:34:32.548533    1442 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0821 03:34:32.548541    1442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0821 03:34:32.568087    1442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0821 03:34:33.517324    1442 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.069159875s)
	I0821 03:34:33.517338    1442 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.069147125s)
	I0821 03:34:33.517342    1442 start.go:901] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I0821 03:34:33.517808    1442 node_ready.go:35] waiting up to 6m0s for node "addons-500000" to be "Ready" ...
	I0821 03:34:33.519592    1442 node_ready.go:49] node "addons-500000" has status "Ready":"True"
	I0821 03:34:33.519599    1442 node_ready.go:38] duration metric: took 1.779708ms waiting for node "addons-500000" to be "Ready" ...
	I0821 03:34:33.519602    1442 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0821 03:34:33.522687    1442 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-97rp7" in "kube-system" namespace to be "Ready" ...
	I0821 03:34:33.964195    1442 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.421717084s)
	I0821 03:34:33.964211    1442 addons.go:467] Verifying addon ingress=true in "addons-500000"
	I0821 03:34:33.968723    1442 out.go:177] * Verifying ingress addon...
	I0821 03:34:33.964338    1442 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.396275834s)
	W0821 03:34:33.968774    1442 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0821 03:34:33.975741    1442 retry.go:31] will retry after 231.591556ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
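The failure above is an ordering race, not a broken manifest: the VolumeSnapshotClass object ships in the same apply batch as the CRD that defines it, so the first apply fails with "ensure CRDs are installed first" until the CRD is established, and minikube simply retries (retry.go:31) before re-applying with --force below. A minimal sketch of that retry pattern; the helper itself is illustrative, only the file path and rough delay come from this log:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyWithRetry re-runs `kubectl apply` until it succeeds or attempts
// are exhausted, which is enough to ride out the CRD-establishment race
// seen in this log. Assumes kubectl is on PATH.
func applyWithRetry(attempts int, delay time.Duration, files ...string) error {
	args := []string{"apply"}
	for _, f := range files {
		args = append(args, "-f", f)
	}
	var lastErr error
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("kubectl apply: %v: %s", err, out)
		time.Sleep(delay)
	}
	return lastErr
}

func main() {
	err := applyWithRetry(3, 250*time.Millisecond,
		"/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml")
	if err != nil {
		fmt.Println(err)
	}
}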
	I0821 03:34:33.976141    1442 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0821 03:34:33.984299    1442 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0821 03:34:33.984307    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:33.987720    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:34.207434    1442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0821 03:34:34.491123    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:34.991180    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:35.490538    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:35.534205    1442 pod_ready.go:102] pod "coredns-5d78c9869d-97rp7" in "kube-system" namespace has status "Ready":"False"
	I0821 03:34:35.990628    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:36.490998    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:36.745839    1442 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.5384555s)
	I0821 03:34:36.990793    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:37.491119    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:37.534210    1442 pod_ready.go:102] pod "coredns-5d78c9869d-97rp7" in "kube-system" namespace has status "Ready":"False"
	I0821 03:34:37.990643    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:38.490772    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:38.997287    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:39.008172    1442 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0821 03:34:39.008186    1442 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/id_rsa Username:docker}
	I0821 03:34:39.055480    1442 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0821 03:34:39.064828    1442 addons.go:231] Setting addon gcp-auth=true in "addons-500000"
	I0821 03:34:39.064858    1442 host.go:66] Checking if "addons-500000" exists ...
	I0821 03:34:39.065649    1442 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0821 03:34:39.065660    1442 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/addons-500000/id_rsa Username:docker}
	I0821 03:34:39.100776    1442 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0821 03:34:39.103705    1442 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0821 03:34:39.107726    1442 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0821 03:34:39.107734    1442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0821 03:34:39.113078    1442 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0821 03:34:39.113087    1442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0821 03:34:39.127541    1442 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0821 03:34:39.127551    1442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0821 03:34:39.133486    1442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0821 03:34:39.491109    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:39.534694    1442 pod_ready.go:102] pod "coredns-5d78c9869d-97rp7" in "kube-system" namespace has status "Ready":"False"
	I0821 03:34:39.629710    1442 addons.go:467] Verifying addon gcp-auth=true in "addons-500000"
	I0821 03:34:39.641410    1442 out.go:177] * Verifying gcp-auth addon...
	I0821 03:34:39.650441    1442 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0821 03:34:39.656554    1442 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0821 03:34:39.656563    1442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 03:34:39.658191    1442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 03:34:39.991177    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:40.161154    1442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 03:34:40.492443    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:40.660810    1442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 03:34:40.990558    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:41.161357    1442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 03:34:41.492269    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:41.534695    1442 pod_ready.go:102] pod "coredns-5d78c9869d-97rp7" in "kube-system" namespace has status "Ready":"False"
	I0821 03:34:41.660947    1442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 03:34:41.990678    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:42.161013    1442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 03:34:42.490658    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:42.660884    1442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 03:34:42.990530    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:43.161042    1442 kapi.go:107] duration metric: took 3.510698166s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0821 03:34:43.165184    1442 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-500000 cluster.
	I0821 03:34:43.169238    1442 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0821 03:34:43.173158    1442 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
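	The half-second "waiting for pod" lines above come from a label-selector polling loop. A minimal sketch of that pattern, assuming client-go and the default kubeconfig path (this is illustrative, not minikube's actual kapi.go code; the namespace, selector, and timeout are taken from the log):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForPods polls pods matching selector in ns until they are all Running
	// or the deadline passes, mirroring the kapi.go:96 log cadence above.
	func waitForPods(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
				metav1.ListOptions{LabelSelector: selector})
			if err == nil && allRunning(pods.Items) {
				return nil
			}
			fmt.Printf("waiting for pod %q\n", selector)
			time.Sleep(500 * time.Millisecond) // matches the ~500ms interval in the log
		}
		return fmt.Errorf("timed out waiting for %q", selector)
	}

	func allRunning(pods []corev1.Pod) bool {
		if len(pods) == 0 {
			return false
		}
		for _, p := range pods {
			if p.Status.Phase != corev1.PodRunning {
				return false
			}
		}
		return true
	}

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(config)
		if err := waitForPods(cs, "gcp-auth", "kubernetes.io/minikube-addons=gcp-auth", 6*time.Minute); err != nil {
			panic(err)
		}
	}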
	I0821 03:34:43.491145    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:43.534713    1442 pod_ready.go:97] pod "coredns-5d78c9869d-97rp7" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-21 03:34:32 -0700 PDT Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-21 03:34:32 -0700 PDT Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-21 03:34:32 -0700 PDT Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-21 03:34:32 -0700 PDT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.105.2 HostIPs:[] PodIP: PodIPs:[] StartTime:2023-08-21 03:34:32 -0700 PDT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-08-21 03:34:33 -0700 PDT,FinishedAt:2023-08-21 03:34:43 -0700 PDT,ContainerID:docker://d9032391cb53f0fa8cfd4e1696eef2d7eb7096ba08423fd5087bb7b4d2fba5ed,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:docker://d9032391cb53f0fa8cfd4e1696eef2d7eb7096ba08423fd5087bb7b4d2fba5ed Started:0x140018d39a0 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0821 03:34:43.534727    1442 pod_ready.go:81] duration metric: took 10.012309458s waiting for pod "coredns-5d78c9869d-97rp7" in "kube-system" namespace to be "Ready" ...
	E0821 03:34:43.534732    1442 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-5d78c9869d-97rp7" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-21 03:34:32 -0700 PDT Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-21 03:34:32 -0700 PDT Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-21 03:34:32 -0700 PDT Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-21 03:34:32 -0700 PDT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.105.2 HostIPs:[] PodIP: PodIPs:[] StartTime:2023-08-21 03:34:32 -0700 PDT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-08-21 03:34:33 -0700 PDT,FinishedAt:2023-08-21 03:34:43 -0700 PDT,ContainerID:docker://d9032391cb53f0fa8cfd4e1696eef2d7eb7096ba08423fd5087bb7b4d2fba5ed,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:docker://d9032391cb53f0fa8cfd4e1696eef2d7eb7096ba08423fd5087bb7b4d2fba5ed Started:0x140018d39a0 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
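	The "skipping!" entries above reflect a readiness check that treats a terminal pod phase as unrecoverable: a pod already in phase Succeeded or Failed can never become Ready, so the wait bails out instead of blocking for the full timeout. A minimal sketch of that predicate, assuming client-go types (not pod_ready.go verbatim):

	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
	)

	// podReady reports whether the pod's Ready condition is True, and returns
	// an error for terminal phases so callers can skip the pod immediately.
	func podReady(p *corev1.Pod) (bool, error) {
		if p.Status.Phase == corev1.PodSucceeded || p.Status.Phase == corev1.PodFailed {
			return false, fmt.Errorf("pod %s has terminal phase %q (skipping)", p.Name, p.Status.Phase)
		}
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				return true, nil
			}
		}
		return false, nil // no Ready=True condition reported yet
	}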
	I0821 03:34:43.534736    1442 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-hbg44" in "kube-system" namespace to be "Ready" ...
	I0821 03:34:43.537136    1442 pod_ready.go:92] pod "coredns-5d78c9869d-hbg44" in "kube-system" namespace has status "Ready":"True"
	I0821 03:34:43.537140    1442 pod_ready.go:81] duration metric: took 2.400375ms waiting for pod "coredns-5d78c9869d-hbg44" in "kube-system" namespace to be "Ready" ...
	I0821 03:34:43.537145    1442 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-500000" in "kube-system" namespace to be "Ready" ...
	I0821 03:34:43.539758    1442 pod_ready.go:92] pod "etcd-addons-500000" in "kube-system" namespace has status "Ready":"True"
	I0821 03:34:43.539762    1442 pod_ready.go:81] duration metric: took 2.614916ms waiting for pod "etcd-addons-500000" in "kube-system" namespace to be "Ready" ...
	I0821 03:34:43.539766    1442 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-500000" in "kube-system" namespace to be "Ready" ...
	I0821 03:34:43.542039    1442 pod_ready.go:92] pod "kube-apiserver-addons-500000" in "kube-system" namespace has status "Ready":"True"
	I0821 03:34:43.542045    1442 pod_ready.go:81] duration metric: took 2.276584ms waiting for pod "kube-apiserver-addons-500000" in "kube-system" namespace to be "Ready" ...
	I0821 03:34:43.542049    1442 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-500000" in "kube-system" namespace to be "Ready" ...
	I0821 03:34:43.544341    1442 pod_ready.go:92] pod "kube-controller-manager-addons-500000" in "kube-system" namespace has status "Ready":"True"
	I0821 03:34:43.544345    1442 pod_ready.go:81] duration metric: took 2.2935ms waiting for pod "kube-controller-manager-addons-500000" in "kube-system" namespace to be "Ready" ...
	I0821 03:34:43.544348    1442 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-z2wj9" in "kube-system" namespace to be "Ready" ...
	I0821 03:34:43.933736    1442 pod_ready.go:92] pod "kube-proxy-z2wj9" in "kube-system" namespace has status "Ready":"True"
	I0821 03:34:43.933748    1442 pod_ready.go:81] duration metric: took 389.407375ms waiting for pod "kube-proxy-z2wj9" in "kube-system" namespace to be "Ready" ...
	I0821 03:34:43.933752    1442 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-500000" in "kube-system" namespace to be "Ready" ...
	I0821 03:34:43.990470    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:44.334535    1442 pod_ready.go:92] pod "kube-scheduler-addons-500000" in "kube-system" namespace has status "Ready":"True"
	I0821 03:34:44.334545    1442 pod_ready.go:81] duration metric: took 400.801125ms waiting for pod "kube-scheduler-addons-500000" in "kube-system" namespace to be "Ready" ...
	I0821 03:34:44.334549    1442 pod_ready.go:38] duration metric: took 10.81524225s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0821 03:34:44.334558    1442 api_server.go:52] waiting for apiserver process to appear ...
	I0821 03:34:44.334639    1442 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0821 03:34:44.339980    1442 api_server.go:72] duration metric: took 11.909098333s to wait for apiserver process to appear ...
	I0821 03:34:44.339987    1442 api_server.go:88] waiting for apiserver healthz status ...
	I0821 03:34:44.339993    1442 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I0821 03:34:44.344178    1442 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
	I0821 03:34:44.344920    1442 api_server.go:141] control plane version: v1.27.4
	I0821 03:34:44.344925    1442 api_server.go:131] duration metric: took 4.936ms to wait for apiserver health ...
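	The healthz check above is a plain HTTPS GET against the apiserver that expects HTTP 200 with body "ok". A minimal self-contained sketch; InsecureSkipVerify is an assumption for brevity (minikube itself trusts the cluster CA bundle):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		// Skip certificate verification only for this illustrative probe.
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		const url = "https://192.168.105.2:8443/healthz"
		resp, err := client.Get(url)
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body) // expect 200 and "ok"
	}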
	I0821 03:34:44.344929    1442 system_pods.go:43] waiting for kube-system pods to appear ...
	I0821 03:34:44.490452    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:44.535983    1442 system_pods.go:59] 8 kube-system pods found
	I0821 03:34:44.535991    1442 system_pods.go:61] "coredns-5d78c9869d-hbg44" [2212048e-385c-4235-ad14-1b9e4e812106] Running
	I0821 03:34:44.535994    1442 system_pods.go:61] "etcd-addons-500000" [dcde2eed-b2a3-4b2d-af51-14d42189714c] Running
	I0821 03:34:44.536011    1442 system_pods.go:61] "kube-apiserver-addons-500000" [a4c38aeb-a7ef-4239-ac34-2437f9c67d96] Running
	I0821 03:34:44.536015    1442 system_pods.go:61] "kube-controller-manager-addons-500000" [972b1e42-cd56-4f77-ad52-a1df2b79fdae] Running
	I0821 03:34:44.536018    1442 system_pods.go:61] "kube-proxy-z2wj9" [56cdd0e9-2b8f-476e-be08-a52381eecb16] Running
	I0821 03:34:44.536020    1442 system_pods.go:61] "kube-scheduler-addons-500000" [c2d2f1e5-45c6-48a9-990d-7e32d9d75976] Running
	I0821 03:34:44.536022    1442 system_pods.go:61] "snapshot-controller-75bbb956b9-4pgqh" [7452ce04-2fbb-4f7a-9e5f-87b8b577fc94] Running
	I0821 03:34:44.536025    1442 system_pods.go:61] "snapshot-controller-75bbb956b9-j9mkf" [dbd2a297-29a5-4435-8fb1-849d8ae91771] Running
	I0821 03:34:44.536028    1442 system_pods.go:74] duration metric: took 191.1015ms to wait for pod list to return data ...
	I0821 03:34:44.536033    1442 default_sa.go:34] waiting for default service account to be created ...
	I0821 03:34:44.734042    1442 default_sa.go:45] found service account: "default"
	I0821 03:34:44.734051    1442 default_sa.go:55] duration metric: took 198.020583ms for default service account to be created ...
	I0821 03:34:44.734055    1442 system_pods.go:116] waiting for k8s-apps to be running ...
	I0821 03:34:44.935348    1442 system_pods.go:86] 8 kube-system pods found
	I0821 03:34:44.935359    1442 system_pods.go:89] "coredns-5d78c9869d-hbg44" [2212048e-385c-4235-ad14-1b9e4e812106] Running
	I0821 03:34:44.935362    1442 system_pods.go:89] "etcd-addons-500000" [dcde2eed-b2a3-4b2d-af51-14d42189714c] Running
	I0821 03:34:44.935365    1442 system_pods.go:89] "kube-apiserver-addons-500000" [a4c38aeb-a7ef-4239-ac34-2437f9c67d96] Running
	I0821 03:34:44.935367    1442 system_pods.go:89] "kube-controller-manager-addons-500000" [972b1e42-cd56-4f77-ad52-a1df2b79fdae] Running
	I0821 03:34:44.935369    1442 system_pods.go:89] "kube-proxy-z2wj9" [56cdd0e9-2b8f-476e-be08-a52381eecb16] Running
	I0821 03:34:44.935372    1442 system_pods.go:89] "kube-scheduler-addons-500000" [c2d2f1e5-45c6-48a9-990d-7e32d9d75976] Running
	I0821 03:34:44.935374    1442 system_pods.go:89] "snapshot-controller-75bbb956b9-4pgqh" [7452ce04-2fbb-4f7a-9e5f-87b8b577fc94] Running
	I0821 03:34:44.935376    1442 system_pods.go:89] "snapshot-controller-75bbb956b9-j9mkf" [dbd2a297-29a5-4435-8fb1-849d8ae91771] Running
	I0821 03:34:44.935380    1442 system_pods.go:126] duration metric: took 201.327917ms to wait for k8s-apps to be running ...
	I0821 03:34:44.935391    1442 system_svc.go:44] waiting for kubelet service to be running ....
	I0821 03:34:44.935475    1442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0821 03:34:44.941643    1442 system_svc.go:56] duration metric: took 6.252209ms WaitForService to wait for kubelet.
	I0821 03:34:44.941651    1442 kubeadm.go:581] duration metric: took 12.5107865s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0821 03:34:44.941660    1442 node_conditions.go:102] verifying NodePressure condition ...
	I0821 03:34:44.990746    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:45.134674    1442 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0821 03:34:45.134706    1442 node_conditions.go:123] node cpu capacity is 2
	I0821 03:34:45.134712    1442 node_conditions.go:105] duration metric: took 193.055083ms to run NodePressure ...
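	The NodePressure verification above reads node capacity and the kubelet-reported pressure conditions. A minimal sketch of that read, assuming client-go and the default kubeconfig path (illustrative, not minikube's node_conditions.go):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(config)
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name,
				n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
			for _, c := range n.Status.Conditions {
				// MemoryPressure/DiskPressure/PIDPressure should be False on a healthy node.
				fmt.Printf("  %s=%s\n", c.Type, c.Status)
			}
		}
	}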
	I0821 03:34:45.134717    1442 start.go:228] waiting for startup goroutines ...
	I0821 03:34:45.490470    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:45.990643    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:46.490327    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:46.990587    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:47.490536    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:47.990358    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:48.490279    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:48.990490    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:49.490328    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:49.990414    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:50.490337    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:50.990260    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:51.490639    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:51.989843    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:52.490813    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:52.990112    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:53.491005    1442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 03:34:53.992627    1442 kapi.go:107] duration metric: took 20.017033875s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0821 03:40:32.405313    1442 kapi.go:107] duration metric: took 6m0.010490834s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	W0821 03:40:32.405643    1442 out.go:239] ! Enabling 'csi-hostpath-driver' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=csi-hostpath-driver pods: context deadline exceeded]
	I0821 03:40:32.421828    1442 kapi.go:107] duration metric: took 6m0.009978583s to wait for kubernetes.io/minikube-addons=registry ...
	W0821 03:40:32.421921    1442 out.go:239] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
	I0821 03:40:32.430174    1442 out.go:177] * Enabled addons: storage-provisioner, cloud-spanner, metrics-server, ingress-dns, inspektor-gadget, default-storageclass, volumesnapshots, gcp-auth, ingress
	I0821 03:40:32.437176    1442 addons.go:502] enable addons completed in 6m0.058033333s: enabled=[storage-provisioner cloud-spanner metrics-server ingress-dns inspektor-gadget default-storageclass volumesnapshots gcp-auth ingress]
	I0821 03:40:32.437214    1442 start.go:233] waiting for cluster config update ...
	I0821 03:40:32.437252    1442 start.go:242] writing updated cluster config ...
	I0821 03:40:32.438394    1442 ssh_runner.go:195] Run: rm -f paused
	I0821 03:40:32.505190    1442 start.go:600] kubectl: 1.27.2, cluster: 1.27.4 (minor skew: 0)
	I0821 03:40:32.509248    1442 out.go:177] * Done! kubectl is now configured to use "addons-500000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Mon 2023-08-21 10:34:00 UTC, ends at Mon 2023-08-21 10:54:25 UTC. --
	Aug 21 10:34:41 addons-500000 dockerd[1153]: time="2023-08-21T10:34:41.956624254Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 21 10:34:42 addons-500000 cri-dockerd[1049]: time="2023-08-21T10:34:42Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bbb4a4c960656b62bb19b9b067c655ea39e12d8756d8701729b8421b997616a1/resolv.conf as [nameserver 10.96.0.10 search ingress-nginx.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Aug 21 10:34:42 addons-500000 cri-dockerd[1049]: time="2023-08-21T10:34:42Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf: Status: Downloaded newer image for gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf"
	Aug 21 10:34:42 addons-500000 dockerd[1148]: time="2023-08-21T10:34:42.514519077Z" level=warning msg="reference for unknown type: " digest="sha256:e5c4824e7375fcf2a393e1c03c293b69759af37a9ca6abdb91b13d78a93da8bd" remote="registry.k8s.io/ingress-nginx/controller@sha256:e5c4824e7375fcf2a393e1c03c293b69759af37a9ca6abdb91b13d78a93da8bd"
	Aug 21 10:34:42 addons-500000 dockerd[1153]: time="2023-08-21T10:34:42.565577154Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 21 10:34:42 addons-500000 dockerd[1153]: time="2023-08-21T10:34:42.565634689Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 21 10:34:42 addons-500000 dockerd[1153]: time="2023-08-21T10:34:42.565652592Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 21 10:34:42 addons-500000 dockerd[1153]: time="2023-08-21T10:34:42.565663687Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 21 10:34:43 addons-500000 dockerd[1153]: time="2023-08-21T10:34:43.460515395Z" level=info msg="shim disconnected" id=d9032391cb53f0fa8cfd4e1696eef2d7eb7096ba08423fd5087bb7b4d2fba5ed namespace=moby
	Aug 21 10:34:43 addons-500000 dockerd[1153]: time="2023-08-21T10:34:43.460544530Z" level=warning msg="cleaning up after shim disconnected" id=d9032391cb53f0fa8cfd4e1696eef2d7eb7096ba08423fd5087bb7b4d2fba5ed namespace=moby
	Aug 21 10:34:43 addons-500000 dockerd[1153]: time="2023-08-21T10:34:43.460548812Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 21 10:34:43 addons-500000 dockerd[1148]: time="2023-08-21T10:34:43.460463883Z" level=info msg="ignoring event" container=d9032391cb53f0fa8cfd4e1696eef2d7eb7096ba08423fd5087bb7b4d2fba5ed module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 21 10:34:43 addons-500000 dockerd[1153]: time="2023-08-21T10:34:43.550734250Z" level=info msg="shim disconnected" id=3c57b48b5f08f4ead2c53d0b29e10a8a3dc35318069e85faa762b9ff0597901d namespace=moby
	Aug 21 10:34:43 addons-500000 dockerd[1148]: time="2023-08-21T10:34:43.550868047Z" level=info msg="ignoring event" container=3c57b48b5f08f4ead2c53d0b29e10a8a3dc35318069e85faa762b9ff0597901d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 21 10:34:43 addons-500000 dockerd[1153]: time="2023-08-21T10:34:43.550901548Z" level=warning msg="cleaning up after shim disconnected" id=3c57b48b5f08f4ead2c53d0b29e10a8a3dc35318069e85faa762b9ff0597901d namespace=moby
	Aug 21 10:34:43 addons-500000 dockerd[1153]: time="2023-08-21T10:34:43.550916158Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 21 10:34:52 addons-500000 cri-dockerd[1049]: time="2023-08-21T10:34:52Z" level=info msg="Pulling image registry.k8s.io/ingress-nginx/controller:v1.8.1@sha256:e5c4824e7375fcf2a393e1c03c293b69759af37a9ca6abdb91b13d78a93da8bd: df2bdb71e370: Extracting [=====================================>             ]  8.782MB/11.56MB"
	Aug 21 10:34:52 addons-500000 dockerd[1148]: time="2023-08-21T10:34:52.972147755Z" level=warning msg="ignored xattrs in archive: underlying filesystem doesn't support them" errors="[operation not supported]"
	Aug 21 10:34:52 addons-500000 dockerd[1148]: time="2023-08-21T10:34:52.973540499Z" level=warning msg="ignored xattrs in archive: underlying filesystem doesn't support them" errors="[operation not supported]"
	Aug 21 10:34:53 addons-500000 dockerd[1148]: time="2023-08-21T10:34:53.079609792Z" level=warning msg="ignored xattrs in archive: underlying filesystem doesn't support them" errors="[operation not supported]"
	Aug 21 10:34:53 addons-500000 cri-dockerd[1049]: time="2023-08-21T10:34:53Z" level=info msg="Stop pulling image registry.k8s.io/ingress-nginx/controller:v1.8.1@sha256:e5c4824e7375fcf2a393e1c03c293b69759af37a9ca6abdb91b13d78a93da8bd: Status: Downloaded newer image for registry.k8s.io/ingress-nginx/controller@sha256:e5c4824e7375fcf2a393e1c03c293b69759af37a9ca6abdb91b13d78a93da8bd"
	Aug 21 10:34:53 addons-500000 dockerd[1153]: time="2023-08-21T10:34:53.201046831Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 21 10:34:53 addons-500000 dockerd[1153]: time="2023-08-21T10:34:53.201094050Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 21 10:34:53 addons-500000 dockerd[1153]: time="2023-08-21T10:34:53.201110708Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 21 10:34:53 addons-500000 dockerd[1153]: time="2023-08-21T10:34:53.201117263Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                         ATTEMPT             POD ID
	734d7d69c9e8b       registry.k8s.io/ingress-nginx/controller@sha256:e5c4824e7375fcf2a393e1c03c293b69759af37a9ca6abdb91b13d78a93da8bd             19 minutes ago      Running             controller                   0                   bbb4a4c960656
	dbe5746b118a6       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf                 19 minutes ago      Running             gcp-auth                     0                   31154fc41fc35
	fc5767357c5d9       8f2588812ab29                                                                                                                19 minutes ago      Exited              patch                        1                   0538e79b5c883
	aa7d89a7d68d0       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:543c40fd093964bc9ab509d3e791f9989963021f1e9e4c9c7b6700b02bfb227b   19 minutes ago      Exited              create                       0                   3c078f4b9885e
	7979593c9bb52       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280      19 minutes ago      Running             volume-snapshot-controller   0                   70a68685a69fb
	fe9609fabef21       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280      19 minutes ago      Running             volume-snapshot-controller   0                   39eda7944d576
	16cfb4c805080       97e04611ad434                                                                                                                19 minutes ago      Running             coredns                      0                   b6fa8f87ea743
	36558206e7ebf       532e5a30e948f                                                                                                                19 minutes ago      Running             kube-proxy                   0                   ccc8633d52ca6
	bd48baf71b163       6eb63895cb67f                                                                                                                20 minutes ago      Running             kube-scheduler               0                   65c9ea48d27ae
	27dc2c0d7a4a5       24bc64e911039                                                                                                                20 minutes ago      Running             etcd                         0                   0f2cdc52bbda6
	dc949a6ce14c1       64aece92d6bde                                                                                                                20 minutes ago      Running             kube-apiserver               0                   090daa0e10080
	41982c5e9fc8f       389f6f052cf83                                                                                                                20 minutes ago      Running             kube-controller-manager      0                   a9c3d15b86bf8
	
	* 
	* ==> controller_ingress [734d7d69c9e8] <==
	*   Build:         dc88dce9ea5e700f3301d16f971fa17c6cfe757d
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.21.6
	
	-------------------------------------------------------------------------------
	
	W0821 10:34:53.255429       6 client_config.go:618] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
	I0821 10:34:53.255517       6 main.go:209] "Creating API client" host="https://10.96.0.1:443"
	I0821 10:34:53.259720       6 main.go:253] "Running in Kubernetes cluster" major="1" minor="27" git="v1.27.4" state="clean" commit="fa3d7990104d7c1f16943a67f11b154b71f6a132" platform="linux/arm64"
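	The client_config.go warning above describes the standard client-go fallback: use an explicit --kubeconfig/--master when given, otherwise fall back to the in-cluster service-account configuration. A minimal sketch of that fallback (assumed, not the ingress controller's actual code):

	package main

	import (
		"flag"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		kubeconfig := flag.String("kubeconfig", "", "path to a kubeconfig file (optional)")
		flag.Parse()

		var cfg *rest.Config
		var err error
		if *kubeconfig != "" {
			cfg, err = clientcmd.BuildConfigFromFlags("", *kubeconfig)
		} else {
			cfg, err = rest.InClusterConfig() // pod service-account credentials
		}
		if err != nil {
			panic(err)
		}
		_ = kubernetes.NewForConfigOrDie(cfg) // client ready for API calls
	}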
	I0821 10:34:53.370154       6 main.go:104] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I0821 10:34:53.376568       6 ssl.go:533] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0821 10:34:53.385083       6 nginx.go:261] "Starting NGINX Ingress controller"
	I0821 10:34:53.389190       6 event.go:285] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"5b999e5a-759f-47c2-858b-4e3d79b34cbe", APIVersion:"v1", ResourceVersion:"433", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0821 10:34:53.391567       6 event.go:285] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"a91d48bb-075d-496f-a947-fa3bf3c2ef7e", APIVersion:"v1", ResourceVersion:"434", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0821 10:34:53.391592       6 event.go:285] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"5124232c-77f2-4a7f-a11f-9600873ca980", APIVersion:"v1", ResourceVersion:"435", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0821 10:34:54.586254       6 nginx.go:304] "Starting NGINX process"
	I0821 10:34:54.586524       6 leaderelection.go:248] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0821 10:34:54.587191       6 nginx.go:324] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0821 10:34:54.588124       6 controller.go:190] "Configuration changes detected, backend reload required"
	I0821 10:34:54.605898       6 leaderelection.go:258] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0821 10:34:54.606668       6 status.go:84] "New leader elected" identity="ingress-nginx-controller-7799c6795f-4ppd9"
	I0821 10:34:54.622098       6 status.go:215] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-7799c6795f-4ppd9" node="addons-500000"
	I0821 10:34:54.663825       6 controller.go:207] "Backend successfully reloaded"
	I0821 10:34:54.663941       6 controller.go:218] "Initial sync, sleeping for 1 second"
	I0821 10:34:54.664013       6 event.go:285] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7799c6795f-4ppd9", UID:"c950764c-9601-4c76-adb3-ddb61bd6335d", APIVersion:"v1", ResourceVersion:"458", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	
	* 
	* ==> coredns [16cfb4c80508] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	[INFO] Reloading complete
	[INFO] 127.0.0.1:52450 - 49271 "HINFO IN 1467224369207536570.5830207891825585757. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.005303742s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-500000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-500000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43
	                    minikube.k8s.io/name=addons-500000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_21T03_34_19_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 21 Aug 2023 10:34:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-500000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 21 Aug 2023 10:54:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 21 Aug 2023 10:50:40 +0000   Mon, 21 Aug 2023 10:34:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 21 Aug 2023 10:50:40 +0000   Mon, 21 Aug 2023 10:34:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 21 Aug 2023 10:50:40 +0000   Mon, 21 Aug 2023 10:34:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 21 Aug 2023 10:50:40 +0000   Mon, 21 Aug 2023 10:34:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-500000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 0e4a1f71467c44c8a10eca186773afe2
	  System UUID:                0e4a1f71467c44c8a10eca186773afe2
	  Boot ID:                    6d5e7ffc-fb7d-41fe-b076-69fd8535d300
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.4
	  Kubelet Version:            v1.27.4
	  Kube-Proxy Version:         v1.27.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  gcp-auth                    gcp-auth-58478865f7-zcg47                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  ingress-nginx               ingress-nginx-controller-7799c6795f-4ppd9    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         19m
	  kube-system                 coredns-5d78c9869d-hbg44                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     19m
	  kube-system                 etcd-addons-500000                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         20m
	  kube-system                 kube-apiserver-addons-500000                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-controller-manager-addons-500000        200m (10%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-proxy-z2wj9                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-addons-500000                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 snapshot-controller-75bbb956b9-4pgqh         0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 snapshot-controller-75bbb956b9-j9mkf         0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 19m   kube-proxy       
	  Normal  Starting                 20m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  20m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  20m   kubelet          Node addons-500000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m   kubelet          Node addons-500000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m   kubelet          Node addons-500000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                20m   kubelet          Node addons-500000 status is now: NodeReady
	  Normal  RegisteredNode           19m   node-controller  Node addons-500000 event: Registered Node addons-500000 in Controller
	
	* 
	* ==> dmesg <==
	* [Aug21 10:33] ACPI: SRAT not present
	[  +0.000000] KASLR disabled due to lack of seed
	[  +0.638012] EINJ: EINJ table not found.
	[  +0.490829] systemd-fstab-generator[117]: Ignoring "noauto" for root device
	[  +0.044680] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000871] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[Aug21 10:34] systemd-fstab-generator[479]: Ignoring "noauto" for root device
	[  +0.063431] systemd-fstab-generator[490]: Ignoring "noauto" for root device
	[  +0.413293] systemd-fstab-generator[750]: Ignoring "noauto" for root device
	[  +0.194883] systemd-fstab-generator[786]: Ignoring "noauto" for root device
	[  +0.079334] systemd-fstab-generator[797]: Ignoring "noauto" for root device
	[  +0.075319] systemd-fstab-generator[810]: Ignoring "noauto" for root device
	[  +1.241580] systemd-fstab-generator[968]: Ignoring "noauto" for root device
	[  +0.080868] systemd-fstab-generator[979]: Ignoring "noauto" for root device
	[  +0.070572] systemd-fstab-generator[990]: Ignoring "noauto" for root device
	[  +0.067357] systemd-fstab-generator[1001]: Ignoring "noauto" for root device
	[  +0.069942] systemd-fstab-generator[1042]: Ignoring "noauto" for root device
	[  +2.503453] systemd-fstab-generator[1141]: Ignoring "noauto" for root device
	[  +2.381640] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.661766] systemd-fstab-generator[1457]: Ignoring "noauto" for root device
	[  +5.156537] systemd-fstab-generator[2350]: Ignoring "noauto" for root device
	[ +13.738428] kauditd_printk_skb: 41 callbacks suppressed
	[  +1.700338] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[  +4.800757] kauditd_printk_skb: 48 callbacks suppressed
	[ +14.143799] kauditd_printk_skb: 54 callbacks suppressed
	
	* 
	* ==> etcd [27dc2c0d7a4a] <==
	* {"level":"info","ts":"2023-08-21T10:34:15.986Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 received MsgPreVoteResp from c46d288d2fcb0590 at term 1"}
	{"level":"info","ts":"2023-08-21T10:34:15.986Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became candidate at term 2"}
	{"level":"info","ts":"2023-08-21T10:34:15.986Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 received MsgVoteResp from c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2023-08-21T10:34:15.986Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became leader at term 2"}
	{"level":"info","ts":"2023-08-21T10:34:15.986Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c46d288d2fcb0590 elected leader c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2023-08-21T10:34:15.991Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"c46d288d2fcb0590","local-member-attributes":"{Name:addons-500000 ClientURLs:[https://192.168.105.2:2379]}","request-path":"/0/members/c46d288d2fcb0590/attributes","cluster-id":"6e03e7863b4f9c54","publish-timeout":"7s"}
	{"level":"info","ts":"2023-08-21T10:34:15.991Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-21T10:34:15.991Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-21T10:34:15.991Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-08-21T10:34:15.992Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-08-21T10:34:16.003Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-08-21T10:34:15.992Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-21T10:34:16.003Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.105.2:2379"}
	{"level":"info","ts":"2023-08-21T10:34:15.992Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-21T10:34:16.003Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-21T10:34:16.003Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-21T10:44:16.025Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":841}
	{"level":"info","ts":"2023-08-21T10:44:16.028Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":841,"took":"2.672822ms","hash":3376273956}
	{"level":"info","ts":"2023-08-21T10:44:16.028Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3376273956,"revision":841,"compact-revision":-1}
	{"level":"info","ts":"2023-08-21T10:49:16.035Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1031}
	{"level":"info","ts":"2023-08-21T10:49:16.038Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1031,"took":"1.375633ms","hash":1895539758}
	{"level":"info","ts":"2023-08-21T10:49:16.038Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1895539758,"revision":1031,"compact-revision":841}
	{"level":"info","ts":"2023-08-21T10:54:16.045Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1222}
	{"level":"info","ts":"2023-08-21T10:54:16.047Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1222,"took":"1.459351ms","hash":3279763987}
	{"level":"info","ts":"2023-08-21T10:54:16.047Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3279763987,"revision":1222,"compact-revision":1031}
	
	* 
	* ==> gcp-auth [dbe5746b118a] <==
	* 2023/08/21 10:34:42 GCP Auth Webhook started!
	
	* 
	* ==> kernel <==
	*  10:54:25 up 20 min,  0 users,  load average: 0.37, 0.34, 0.28
	Linux addons-500000 5.10.57 #1 SMP PREEMPT Fri Jul 14 22:49:12 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [dc949a6ce14c] <==
	* I0821 10:34:39.583629       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs=map[IPv4:10.110.39.22]
	I0821 10:39:16.746832       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0821 10:39:16.747262       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0821 10:39:16.747727       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0821 10:39:16.747921       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0821 10:39:16.759280       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0821 10:39:16.759360       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0821 10:44:16.754789       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0821 10:44:16.754844       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0821 10:44:16.754880       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0821 10:44:16.754904       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0821 10:44:16.755317       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0821 10:44:16.755352       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0821 10:49:16.748790       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0821 10:49:16.749408       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0821 10:49:16.759393       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0821 10:49:16.759510       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0821 10:49:16.766063       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0821 10:49:16.766169       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0821 10:54:16.749624       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0821 10:54:16.750123       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0821 10:54:16.755478       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0821 10:54:16.755644       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0821 10:54:16.765351       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0821 10:54:16.765428       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [41982c5e9fc8] <==
	* I0821 10:34:42.731971       1 job_controller.go:523] enqueueing job ingress-nginx/ingress-nginx-admission-patch
	I0821 10:34:42.736066       1 job_controller.go:523] enqueueing job ingress-nginx/ingress-nginx-admission-patch
	I0821 10:34:42.737082       1 event.go:307] "Event occurred" object="ingress-nginx/ingress-nginx-admission-patch" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0821 10:34:42.747456       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0821 10:34:42.752783       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0821 10:34:42.756485       1 job_controller.go:523] enqueueing job ingress-nginx/ingress-nginx-admission-patch
	I0821 10:34:42.854473       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0821 10:34:42.856753       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0821 10:34:42.858553       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0821 10:34:42.858609       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0821 10:34:42.859646       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0821 10:34:42.893612       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0821 10:34:42.895861       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0821 10:34:42.897862       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-create" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0821 10:34:42.897954       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0821 10:34:42.899189       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0821 10:35:01.688712       1 resource_quota_monitor.go:223] "QuotaMonitor created object count evaluator" resource="volumesnapshots.snapshot.storage.k8s.io"
	I0821 10:35:01.688853       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0821 10:35:01.789717       1 shared_informer.go:318] Caches are synced for resource quota
	I0821 10:35:02.109377       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0821 10:35:02.210585       1 shared_informer.go:318] Caches are synced for garbage collector
	I0821 10:35:12.010356       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0821 10:35:12.011197       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0821 10:35:12.022044       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0821 10:35:12.024702       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	
	* 
	* ==> kube-proxy [36558206e7eb] <==
	* I0821 10:34:32.961845       1 node.go:141] Successfully retrieved node IP: 192.168.105.2
	I0821 10:34:32.961903       1 server_others.go:110] "Detected node IP" address="192.168.105.2"
	I0821 10:34:32.961922       1 server_others.go:554] "Using iptables proxy"
	I0821 10:34:32.984111       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0821 10:34:32.984124       1 server_others.go:192] "Using iptables Proxier"
	I0821 10:34:32.984147       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0821 10:34:32.984347       1 server.go:658] "Version info" version="v1.27.4"
	I0821 10:34:32.984357       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0821 10:34:32.984958       1 config.go:315] "Starting node config controller"
	I0821 10:34:32.984965       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0821 10:34:32.985291       1 config.go:188] "Starting service config controller"
	I0821 10:34:32.985295       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0821 10:34:32.985301       1 config.go:97] "Starting endpoint slice config controller"
	I0821 10:34:32.985318       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0821 10:34:33.085576       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0821 10:34:33.085604       1 shared_informer.go:318] Caches are synced for node config
	I0821 10:34:33.085608       1 shared_informer.go:318] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [bd48baf71b16] <==
	* W0821 10:34:16.768490       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0821 10:34:16.768493       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0821 10:34:16.768508       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0821 10:34:16.768511       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0821 10:34:16.768562       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0821 10:34:16.768566       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0821 10:34:17.606010       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0821 10:34:17.606029       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0821 10:34:17.645166       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0821 10:34:17.645193       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0821 10:34:17.674598       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0821 10:34:17.674623       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0821 10:34:17.707767       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0821 10:34:17.707781       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0821 10:34:17.724040       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0821 10:34:17.724057       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0821 10:34:17.728085       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0821 10:34:17.728146       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0821 10:34:17.756871       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0821 10:34:17.756889       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0821 10:34:17.785527       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0821 10:34:17.785576       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0821 10:34:17.785527       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0821 10:34:17.785647       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0821 10:34:20.949364       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-08-21 10:34:00 UTC, ends at Mon 2023-08-21 10:54:25 UTC. --
	Aug 21 10:49:19 addons-500000 kubelet[2369]: E0821 10:49:19.565825    2369 iptables.go:575] "Could not set up iptables canary" err=<
	Aug 21 10:49:19 addons-500000 kubelet[2369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 21 10:49:19 addons-500000 kubelet[2369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 21 10:49:19 addons-500000 kubelet[2369]:  > table=nat chain=KUBE-KUBELET-CANARY
	Aug 21 10:50:19 addons-500000 kubelet[2369]: E0821 10:50:19.566360    2369 iptables.go:575] "Could not set up iptables canary" err=<
	Aug 21 10:50:19 addons-500000 kubelet[2369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 21 10:50:19 addons-500000 kubelet[2369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 21 10:50:19 addons-500000 kubelet[2369]:  > table=nat chain=KUBE-KUBELET-CANARY
	Aug 21 10:51:19 addons-500000 kubelet[2369]: E0821 10:51:19.566744    2369 iptables.go:575] "Could not set up iptables canary" err=<
	Aug 21 10:51:19 addons-500000 kubelet[2369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 21 10:51:19 addons-500000 kubelet[2369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 21 10:51:19 addons-500000 kubelet[2369]:  > table=nat chain=KUBE-KUBELET-CANARY
	Aug 21 10:52:19 addons-500000 kubelet[2369]: E0821 10:52:19.565301    2369 iptables.go:575] "Could not set up iptables canary" err=<
	Aug 21 10:52:19 addons-500000 kubelet[2369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 21 10:52:19 addons-500000 kubelet[2369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 21 10:52:19 addons-500000 kubelet[2369]:  > table=nat chain=KUBE-KUBELET-CANARY
	Aug 21 10:53:19 addons-500000 kubelet[2369]: E0821 10:53:19.565636    2369 iptables.go:575] "Could not set up iptables canary" err=<
	Aug 21 10:53:19 addons-500000 kubelet[2369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 21 10:53:19 addons-500000 kubelet[2369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 21 10:53:19 addons-500000 kubelet[2369]:  > table=nat chain=KUBE-KUBELET-CANARY
	Aug 21 10:54:19 addons-500000 kubelet[2369]: W0821 10:54:19.460164    2369 machine.go:65] Cannot read vendor id correctly, set empty.
	Aug 21 10:54:19 addons-500000 kubelet[2369]: E0821 10:54:19.566314    2369 iptables.go:575] "Could not set up iptables canary" err=<
	Aug 21 10:54:19 addons-500000 kubelet[2369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 21 10:54:19 addons-500000 kubelet[2369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 21 10:54:19 addons-500000 kubelet[2369]:  > table=nat chain=KUBE-KUBELET-CANARY
	

-- /stdout --
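
Note on the repeating kubelet errors in the log above: kubelet periodically runs an iptables "canary" probe, and here the guest kernel has no ip6tables nat support, so every probe fails with exit status 3 ("Table does not exist"). Below is a rough Go analogue of that probe, a sketch only: it assumes it runs as root inside the guest with ip6tables on PATH, and the chain name PROBE-CANARY is hypothetical.

	// canary.go - a minimal sketch of kubelet's ip6tables canary check.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Creating a chain in the nat table fails with exit status 3 when the
		// kernel lacks ip6tables nat support, matching the log lines above.
		out, err := exec.Command("ip6tables", "-t", "nat", "-N", "PROBE-CANARY").CombinedOutput()
		if err != nil {
			fmt.Printf("canary failed: %v\n%s", err, out)
			return
		}
		// Remove the probe chain again if creation succeeded.
		exec.Command("ip6tables", "-t", "nat", "-X", "PROBE-CANARY").Run()
		fmt.Println("ip6tables nat table is available")
	}
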
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-500000 -n addons-500000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-500000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-cxgb2 ingress-nginx-admission-patch-fkwhp
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/CloudSpanner]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-500000 describe pod ingress-nginx-admission-create-cxgb2 ingress-nginx-admission-patch-fkwhp
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-500000 describe pod ingress-nginx-admission-create-cxgb2 ingress-nginx-admission-patch-fkwhp: exit status 1 (36.080208ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-cxgb2" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-fkwhp" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-500000 describe pod ingress-nginx-admission-create-cxgb2 ingress-nginx-admission-patch-fkwhp: exit status 1
--- FAIL: TestAddons/parallel/CloudSpanner (832.89s)

TestAddons/serial (0s)

=== RUN   TestAddons/serial
addons_test.go:138: Unable to run more tests (deadline exceeded)
--- FAIL: TestAddons/serial (0.00s)

TestAddons/StoppedEnableDisable (0s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:148: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-500000
addons_test.go:148: (dbg) Non-zero exit: out/minikube-darwin-arm64 stop -p addons-500000: context deadline exceeded (500ns)
addons_test.go:150: failed to stop minikube. args "out/minikube-darwin-arm64 stop -p addons-500000" : context deadline exceeded
addons_test.go:152: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-500000
addons_test.go:152: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-500000: context deadline exceeded (83ns)
addons_test.go:154: failed to enable dashboard addon: args "out/minikube-darwin-arm64 addons enable dashboard -p addons-500000" : context deadline exceeded
addons_test.go:156: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-500000
addons_test.go:156: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-500000: context deadline exceeded (42ns)
addons_test.go:158: failed to disable dashboard addon: args "out/minikube-darwin-arm64 addons disable dashboard -p addons-500000" : context deadline exceeded
addons_test.go:161: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-500000
addons_test.go:161: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable gvisor -p addons-500000: context deadline exceeded (42ns)
addons_test.go:163: failed to disable non-enabled addon: args "out/minikube-darwin-arm64 addons disable gvisor -p addons-500000" : context deadline exceeded
--- FAIL: TestAddons/StoppedEnableDisable (0.00s)
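
The nanosecond durations above (500ns, 83ns, 42ns) are what remained of the suite's shared context when each command was attempted: the overall test deadline had already passed, so every Run returned context deadline exceeded without doing any work. A minimal sketch of that behaviour (illustrative only, not the harness code itself):

	package main

	import (
		"context"
		"fmt"
		"time"
	)

	func main() {
		// An effectively already-expired deadline: Done() fires almost
		// immediately, and any caller that checks ctx.Err() bails out
		// before doing real work.
		ctx, cancel := context.WithTimeout(context.Background(), time.Nanosecond)
		defer cancel()
		<-ctx.Done()
		fmt.Println(ctx.Err()) // context deadline exceeded
	}
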

TestCertOptions (10.08s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-591000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-591000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.803877084s)

-- stdout --
	* [cert-options-591000] minikube v1.31.2 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17102
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node cert-options-591000 in cluster cert-options-591000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-591000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-591000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-591000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
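
Every qemu2 start in this report fails the same way: the driver launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, which cannot connect to /var/run/socket_vmnet, i.e. no socket_vmnet daemon is listening on the agent. A minimal probe for that condition (a sketch; the socket path is taken from the logs above):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// "connection refused" here reproduces the driver failure: the
		// socket file may exist, but nothing has it open for accept().
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", time.Second)
		if err != nil {
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}
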
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-591000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-591000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 89 (78.288292ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-591000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-591000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 89
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
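
For reference, the SAN assertions at cert_options_test.go:69 boil down to parsing the apiserver certificate and checking its subject alternative names; they can only fail wholesale here because the cluster never came up and the cert was never read. A rough Go equivalent of that check (a sketch; the cert path is the one the test reads over SSH):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		fmt.Println("DNS SANs:", cert.DNSNames)   // expect localhost, www.google.com
		fmt.Println("IP SANs:", cert.IPAddresses) // expect 127.0.0.1, 192.168.15.15
	}
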
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-591000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-591000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-591000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 89 (38.260084ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-591000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-591000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 89
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-591000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2023-08-21 04:26:24.075911 -0700 PDT m=+3189.088640084
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-591000 -n cert-options-591000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-591000 -n cert-options-591000: exit status 7 (28.850125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-591000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-591000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-591000
--- FAIL: TestCertOptions (10.08s)

TestCertExpiration (195.25s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-150000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-150000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.846973708s)

-- stdout --
	* [cert-expiration-150000] minikube v1.31.2 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17102
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node cert-expiration-150000 in cluster cert-expiration-150000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-150000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-150000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-150000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-150000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-150000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.230648375s)

-- stdout --
	* [cert-expiration-150000] minikube v1.31.2 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17102
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node cert-expiration-150000 in cluster cert-expiration-150000
	* Restarting existing qemu2 VM for "cert-expiration-150000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-150000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-150000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-150000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-150000] minikube v1.31.2 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17102
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node cert-expiration-150000 in cluster cert-expiration-150000
	* Restarting existing qemu2 VM for "cert-expiration-150000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-150000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-150000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2023-08-21 04:29:24.108532 -0700 PDT m=+3369.124692834
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-150000 -n cert-expiration-150000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-150000 -n cert-expiration-150000: exit status 7 (65.95175ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-150000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-150000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-150000
--- FAIL: TestCertExpiration (195.25s)

TestDockerFlags (9.94s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-681000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-681000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.686361584s)

-- stdout --
	* [docker-flags-681000] minikube v1.31.2 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17102
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node docker-flags-681000 in cluster docker-flags-681000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-681000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0821 04:26:04.212393    4322 out.go:296] Setting OutFile to fd 1 ...
	I0821 04:26:04.212507    4322 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:26:04.212510    4322 out.go:309] Setting ErrFile to fd 2...
	I0821 04:26:04.212512    4322 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:26:04.212626    4322 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17102-920/.minikube/bin
	I0821 04:26:04.213620    4322 out.go:303] Setting JSON to false
	I0821 04:26:04.228693    4322 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3338,"bootTime":1692613826,"procs":421,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0821 04:26:04.228763    4322 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0821 04:26:04.233889    4322 out.go:177] * [docker-flags-681000] minikube v1.31.2 on Darwin 13.5 (arm64)
	I0821 04:26:04.241889    4322 out.go:177]   - MINIKUBE_LOCATION=17102
	I0821 04:26:04.241947    4322 notify.go:220] Checking for updates...
	I0821 04:26:04.245870    4322 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	I0821 04:26:04.248769    4322 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0821 04:26:04.251820    4322 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0821 04:26:04.254846    4322 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	I0821 04:26:04.257844    4322 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0821 04:26:04.261143    4322 config.go:182] Loaded profile config "force-systemd-flag-740000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0821 04:26:04.261206    4322 config.go:182] Loaded profile config "multinode-806000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0821 04:26:04.261248    4322 driver.go:373] Setting default libvirt URI to qemu:///system
	I0821 04:26:04.265909    4322 out.go:177] * Using the qemu2 driver based on user configuration
	I0821 04:26:04.272780    4322 start.go:298] selected driver: qemu2
	I0821 04:26:04.272790    4322 start.go:902] validating driver "qemu2" against <nil>
	I0821 04:26:04.272797    4322 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0821 04:26:04.274712    4322 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0821 04:26:04.278835    4322 out.go:177] * Automatically selected the socket_vmnet network
	I0821 04:26:04.281819    4322 start_flags.go:914] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0821 04:26:04.281845    4322 cni.go:84] Creating CNI manager for ""
	I0821 04:26:04.281851    4322 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0821 04:26:04.281856    4322 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0821 04:26:04.281862    4322 start_flags.go:319] config:
	{Name:docker-flags-681000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:docker-flags-681000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 04:26:04.286006    4322 iso.go:125] acquiring lock: {Name:mk813ea611542195bb0511881888be3fabc72ff6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 04:26:04.290090    4322 out.go:177] * Starting control plane node docker-flags-681000 in cluster docker-flags-681000
	I0821 04:26:04.293824    4322 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime docker
	I0821 04:26:04.293841    4322 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4
	I0821 04:26:04.293878    4322 cache.go:57] Caching tarball of preloaded images
	I0821 04:26:04.293966    4322 preload.go:174] Found /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0821 04:26:04.293972    4322 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on docker
	I0821 04:26:04.294058    4322 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/docker-flags-681000/config.json ...
	I0821 04:26:04.294070    4322 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/docker-flags-681000/config.json: {Name:mk24bb03eabb4a207603f386cc05078005c2b61e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 04:26:04.294279    4322 start.go:365] acquiring machines lock for docker-flags-681000: {Name:mk9b32d9fe994be32d77812db464b2cfa7bfb400 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0821 04:26:04.294308    4322 start.go:369] acquired machines lock for "docker-flags-681000" in 23.75µs
	I0821 04:26:04.294319    4322 start.go:93] Provisioning new machine with config: &{Name:docker-flags-681000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:docker-flags-681000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0821 04:26:04.294351    4322 start.go:125] createHost starting for "" (driver="qemu2")
	I0821 04:26:04.301816    4322 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0821 04:26:04.317792    4322 start.go:159] libmachine.API.Create for "docker-flags-681000" (driver="qemu2")
	I0821 04:26:04.317818    4322 client.go:168] LocalClient.Create starting
	I0821 04:26:04.317877    4322 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem
	I0821 04:26:04.317907    4322 main.go:141] libmachine: Decoding PEM data...
	I0821 04:26:04.317916    4322 main.go:141] libmachine: Parsing certificate...
	I0821 04:26:04.317957    4322 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/cert.pem
	I0821 04:26:04.317977    4322 main.go:141] libmachine: Decoding PEM data...
	I0821 04:26:04.317984    4322 main.go:141] libmachine: Parsing certificate...
	I0821 04:26:04.318304    4322 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17102-920/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17102-920/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0821 04:26:04.436831    4322 main.go:141] libmachine: Creating SSH key...
	I0821 04:26:04.487140    4322 main.go:141] libmachine: Creating Disk image...
	I0821 04:26:04.487146    4322 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0821 04:26:04.487300    4322 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17102-920/.minikube/machines/docker-flags-681000/disk.qcow2.raw /Users/jenkins/minikube-integration/17102-920/.minikube/machines/docker-flags-681000/disk.qcow2
	I0821 04:26:04.495606    4322 main.go:141] libmachine: STDOUT: 
	I0821 04:26:04.495621    4322 main.go:141] libmachine: STDERR: 
	I0821 04:26:04.495668    4322 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/docker-flags-681000/disk.qcow2 +20000M
	I0821 04:26:04.502909    4322 main.go:141] libmachine: STDOUT: Image resized.
	
	I0821 04:26:04.502932    4322 main.go:141] libmachine: STDERR: 
	I0821 04:26:04.502960    4322 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17102-920/.minikube/machines/docker-flags-681000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17102-920/.minikube/machines/docker-flags-681000/disk.qcow2
	I0821 04:26:04.502965    4322 main.go:141] libmachine: Starting QEMU VM...
	I0821 04:26:04.503008    4322 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17102-920/.minikube/machines/docker-flags-681000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/docker-flags-681000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17102-920/.minikube/machines/docker-flags-681000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:98:b8:f0:ac:fd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/docker-flags-681000/disk.qcow2
	I0821 04:26:04.504577    4322 main.go:141] libmachine: STDOUT: 
	I0821 04:26:04.504590    4322 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0821 04:26:04.504620    4322 client.go:171] LocalClient.Create took 186.79725ms
	I0821 04:26:06.506730    4322 start.go:128] duration metric: createHost completed in 2.21240625s
	I0821 04:26:06.506791    4322 start.go:83] releasing machines lock for "docker-flags-681000", held for 2.21252075s
	W0821 04:26:06.506847    4322 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:26:06.527959    4322 out.go:177] * Deleting "docker-flags-681000" in qemu2 ...
	W0821 04:26:06.544825    4322 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:26:06.544845    4322 start.go:687] Will try again in 5 seconds ...
	I0821 04:26:11.547026    4322 start.go:365] acquiring machines lock for docker-flags-681000: {Name:mk9b32d9fe994be32d77812db464b2cfa7bfb400 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0821 04:26:11.547425    4322 start.go:369] acquired machines lock for "docker-flags-681000" in 286.375µs
	I0821 04:26:11.547559    4322 start.go:93] Provisioning new machine with config: &{Name:docker-flags-681000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:docker-flags-681000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0821 04:26:11.547874    4322 start.go:125] createHost starting for "" (driver="qemu2")
	I0821 04:26:11.557160    4322 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0821 04:26:11.603965    4322 start.go:159] libmachine.API.Create for "docker-flags-681000" (driver="qemu2")
	I0821 04:26:11.604008    4322 client.go:168] LocalClient.Create starting
	I0821 04:26:11.604200    4322 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem
	I0821 04:26:11.604279    4322 main.go:141] libmachine: Decoding PEM data...
	I0821 04:26:11.604301    4322 main.go:141] libmachine: Parsing certificate...
	I0821 04:26:11.604381    4322 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/cert.pem
	I0821 04:26:11.604423    4322 main.go:141] libmachine: Decoding PEM data...
	I0821 04:26:11.604438    4322 main.go:141] libmachine: Parsing certificate...
	I0821 04:26:11.605202    4322 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17102-920/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17102-920/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0821 04:26:11.741257    4322 main.go:141] libmachine: Creating SSH key...
	I0821 04:26:11.811686    4322 main.go:141] libmachine: Creating Disk image...
	I0821 04:26:11.811691    4322 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0821 04:26:11.811829    4322 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17102-920/.minikube/machines/docker-flags-681000/disk.qcow2.raw /Users/jenkins/minikube-integration/17102-920/.minikube/machines/docker-flags-681000/disk.qcow2
	I0821 04:26:11.820238    4322 main.go:141] libmachine: STDOUT: 
	I0821 04:26:11.820253    4322 main.go:141] libmachine: STDERR: 
	I0821 04:26:11.820297    4322 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/docker-flags-681000/disk.qcow2 +20000M
	I0821 04:26:11.827372    4322 main.go:141] libmachine: STDOUT: Image resized.
	
	I0821 04:26:11.827383    4322 main.go:141] libmachine: STDERR: 
	I0821 04:26:11.827396    4322 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17102-920/.minikube/machines/docker-flags-681000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17102-920/.minikube/machines/docker-flags-681000/disk.qcow2
	I0821 04:26:11.827400    4322 main.go:141] libmachine: Starting QEMU VM...
	I0821 04:26:11.827437    4322 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17102-920/.minikube/machines/docker-flags-681000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/docker-flags-681000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17102-920/.minikube/machines/docker-flags-681000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:6b:4e:95:f8:ac -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/docker-flags-681000/disk.qcow2
	I0821 04:26:11.828905    4322 main.go:141] libmachine: STDOUT: 
	I0821 04:26:11.828919    4322 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0821 04:26:11.828932    4322 client.go:171] LocalClient.Create took 224.922709ms
	I0821 04:26:13.831133    4322 start.go:128] duration metric: createHost completed in 2.283269708s
	I0821 04:26:13.831204    4322 start.go:83] releasing machines lock for "docker-flags-681000", held for 2.283800708s
	W0821 04:26:13.831676    4322 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-681000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-681000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:26:13.842311    4322 out.go:177] 
	W0821 04:26:13.846395    4322 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0821 04:26:13.846439    4322 out.go:239] * 
	* 
	W0821 04:26:13.849161    4322 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0821 04:26:13.859373    4322 out.go:177] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-681000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-681000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-681000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 89 (78.612166ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p docker-flags-681000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-681000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 89
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-681000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-681000\"\n"*.
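
The assertion behind docker_test.go:63 is a substring check against the output of `systemctl show docker --property=Environment` from inside the guest; because the VM never started, the check ran against the "control plane node must be running" message instead. A standalone sketch of the intended check (assumes a guest where docker is managed by systemd; not the test's exact code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// On a healthy guest this prints something like
		// "Environment=FOO=BAR BAZ=BAT".
		out, err := exec.Command("systemctl", "show", "docker",
			"--property=Environment", "--no-pager").Output()
		if err != nil {
			panic(err)
		}
		for _, kv := range []string{"FOO=BAR", "BAZ=BAT"} {
			fmt.Printf("%s passed through: %v\n", kv, strings.Contains(string(out), kv))
		}
	}
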
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-681000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-681000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 89 (44.360458ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p docker-flags-681000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-681000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 89
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-681000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-681000\"\n"
panic.go:522: *** TestDockerFlags FAILED at 2023-08-21 04:26:13.997395 -0700 PDT m=+3179.009924543
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-681000 -n docker-flags-681000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-681000 -n docker-flags-681000: exit status 7 (29.085667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-681000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-681000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-681000
--- FAIL: TestDockerFlags (9.94s)
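
TestDockerFlags never reached its --docker-env/--docker-opt assertions: VM creation failed because nothing was listening on /var/run/socket_vmnet, so every later ssh and status check ran against a stopped profile. The same host condition sinks TestForceSystemdFlag and TestForceSystemdEnv below. A minimal pre-flight probe for it, offered as a sketch rather than as part of the suite (the socket path comes from the SocketVMnetPath field in the config dump in the log):

// socketprobe.go: dial the unix socket that socket_vmnet_client uses and
// surface the same "connection refused" these tests hit when the
// socket_vmnet daemon is not running on the build host.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // SocketVMnetPath in the cluster config
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet unreachable at %s: %v\n", sock, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections at", sock)
}

Running a probe like this before the qemu2 suites would turn each ten-second provisioning retry loop into an immediate, clearly attributed host error.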

TestForceSystemdFlag (10.51s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-740000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
E0821 04:26:01.106179    1362 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/ingress-addon-legacy-717000/client.crt: no such file or directory
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-740000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.2993725s)

-- stdout --
	* [force-systemd-flag-740000] minikube v1.31.2 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17102
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node force-systemd-flag-740000 in cluster force-systemd-flag-740000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-740000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0821 04:25:58.550530    4298 out.go:296] Setting OutFile to fd 1 ...
	I0821 04:25:58.550658    4298 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:25:58.550661    4298 out.go:309] Setting ErrFile to fd 2...
	I0821 04:25:58.550663    4298 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:25:58.550792    4298 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17102-920/.minikube/bin
	I0821 04:25:58.551741    4298 out.go:303] Setting JSON to false
	I0821 04:25:58.566565    4298 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3332,"bootTime":1692613826,"procs":424,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0821 04:25:58.566636    4298 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0821 04:25:58.572297    4298 out.go:177] * [force-systemd-flag-740000] minikube v1.31.2 on Darwin 13.5 (arm64)
	I0821 04:25:58.579341    4298 out.go:177]   - MINIKUBE_LOCATION=17102
	I0821 04:25:58.579348    4298 notify.go:220] Checking for updates...
	I0821 04:25:58.587241    4298 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	I0821 04:25:58.591133    4298 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0821 04:25:58.594223    4298 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0821 04:25:58.597272    4298 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	I0821 04:25:58.600311    4298 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0821 04:25:58.603532    4298 config.go:182] Loaded profile config "force-systemd-env-889000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0821 04:25:58.603599    4298 config.go:182] Loaded profile config "multinode-806000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0821 04:25:58.603642    4298 driver.go:373] Setting default libvirt URI to qemu:///system
	I0821 04:25:58.608259    4298 out.go:177] * Using the qemu2 driver based on user configuration
	I0821 04:25:58.615163    4298 start.go:298] selected driver: qemu2
	I0821 04:25:58.615175    4298 start.go:902] validating driver "qemu2" against <nil>
	I0821 04:25:58.615187    4298 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0821 04:25:58.617113    4298 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0821 04:25:58.620232    4298 out.go:177] * Automatically selected the socket_vmnet network
	I0821 04:25:58.623205    4298 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0821 04:25:58.623220    4298 cni.go:84] Creating CNI manager for ""
	I0821 04:25:58.623225    4298 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0821 04:25:58.623229    4298 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0821 04:25:58.623236    4298 start_flags.go:319] config:
	{Name:force-systemd-flag-740000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:force-systemd-flag-740000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 04:25:58.627310    4298 iso.go:125] acquiring lock: {Name:mk813ea611542195bb0511881888be3fabc72ff6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 04:25:58.634242    4298 out.go:177] * Starting control plane node force-systemd-flag-740000 in cluster force-systemd-flag-740000
	I0821 04:25:58.638196    4298 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime docker
	I0821 04:25:58.638215    4298 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4
	I0821 04:25:58.638228    4298 cache.go:57] Caching tarball of preloaded images
	I0821 04:25:58.638296    4298 preload.go:174] Found /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0821 04:25:58.638301    4298 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on docker
	I0821 04:25:58.638362    4298 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/force-systemd-flag-740000/config.json ...
	I0821 04:25:58.638375    4298 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/force-systemd-flag-740000/config.json: {Name:mk98ef2e14bb71384641dcd8a9ce18776b644b91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 04:25:58.638589    4298 start.go:365] acquiring machines lock for force-systemd-flag-740000: {Name:mk9b32d9fe994be32d77812db464b2cfa7bfb400 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0821 04:25:58.638619    4298 start.go:369] acquired machines lock for "force-systemd-flag-740000" in 23.5µs
	I0821 04:25:58.638630    4298 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-740000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:force-systemd-flag-740000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0821 04:25:58.638675    4298 start.go:125] createHost starting for "" (driver="qemu2")
	I0821 04:25:58.646199    4298 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0821 04:25:58.661790    4298 start.go:159] libmachine.API.Create for "force-systemd-flag-740000" (driver="qemu2")
	I0821 04:25:58.661814    4298 client.go:168] LocalClient.Create starting
	I0821 04:25:58.661870    4298 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem
	I0821 04:25:58.661895    4298 main.go:141] libmachine: Decoding PEM data...
	I0821 04:25:58.661908    4298 main.go:141] libmachine: Parsing certificate...
	I0821 04:25:58.661945    4298 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/cert.pem
	I0821 04:25:58.661962    4298 main.go:141] libmachine: Decoding PEM data...
	I0821 04:25:58.661972    4298 main.go:141] libmachine: Parsing certificate...
	I0821 04:25:58.662313    4298 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17102-920/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17102-920/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0821 04:25:58.782777    4298 main.go:141] libmachine: Creating SSH key...
	I0821 04:25:58.867962    4298 main.go:141] libmachine: Creating Disk image...
	I0821 04:25:58.867973    4298 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0821 04:25:58.868133    4298 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17102-920/.minikube/machines/force-systemd-flag-740000/disk.qcow2.raw /Users/jenkins/minikube-integration/17102-920/.minikube/machines/force-systemd-flag-740000/disk.qcow2
	I0821 04:25:58.876569    4298 main.go:141] libmachine: STDOUT: 
	I0821 04:25:58.876582    4298 main.go:141] libmachine: STDERR: 
	I0821 04:25:58.876630    4298 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/force-systemd-flag-740000/disk.qcow2 +20000M
	I0821 04:25:58.883717    4298 main.go:141] libmachine: STDOUT: Image resized.
	
	I0821 04:25:58.883729    4298 main.go:141] libmachine: STDERR: 
	I0821 04:25:58.883743    4298 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17102-920/.minikube/machines/force-systemd-flag-740000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17102-920/.minikube/machines/force-systemd-flag-740000/disk.qcow2
	I0821 04:25:58.883748    4298 main.go:141] libmachine: Starting QEMU VM...
	I0821 04:25:58.883785    4298 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17102-920/.minikube/machines/force-systemd-flag-740000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/force-systemd-flag-740000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17102-920/.minikube/machines/force-systemd-flag-740000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:9e:ab:20:c0:34 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/force-systemd-flag-740000/disk.qcow2
	I0821 04:25:58.885307    4298 main.go:141] libmachine: STDOUT: 
	I0821 04:25:58.885319    4298 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0821 04:25:58.885341    4298 client.go:171] LocalClient.Create took 223.520125ms
	I0821 04:26:00.887509    4298 start.go:128] duration metric: createHost completed in 2.248853167s
	I0821 04:26:00.887603    4298 start.go:83] releasing machines lock for "force-systemd-flag-740000", held for 2.249023708s
	W0821 04:26:00.887665    4298 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:26:00.895025    4298 out.go:177] * Deleting "force-systemd-flag-740000" in qemu2 ...
	W0821 04:26:00.916772    4298 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:26:00.916881    4298 start.go:687] Will try again in 5 seconds ...
	I0821 04:26:05.919006    4298 start.go:365] acquiring machines lock for force-systemd-flag-740000: {Name:mk9b32d9fe994be32d77812db464b2cfa7bfb400 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0821 04:26:06.506973    4298 start.go:369] acquired machines lock for "force-systemd-flag-740000" in 587.829625ms
	I0821 04:26:06.507066    4298 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-740000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:force-systemd-flag-740000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0821 04:26:06.507309    4298 start.go:125] createHost starting for "" (driver="qemu2")
	I0821 04:26:06.518969    4298 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0821 04:26:06.565143    4298 start.go:159] libmachine.API.Create for "force-systemd-flag-740000" (driver="qemu2")
	I0821 04:26:06.565182    4298 client.go:168] LocalClient.Create starting
	I0821 04:26:06.565291    4298 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem
	I0821 04:26:06.565345    4298 main.go:141] libmachine: Decoding PEM data...
	I0821 04:26:06.565363    4298 main.go:141] libmachine: Parsing certificate...
	I0821 04:26:06.565449    4298 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/cert.pem
	I0821 04:26:06.565503    4298 main.go:141] libmachine: Decoding PEM data...
	I0821 04:26:06.565514    4298 main.go:141] libmachine: Parsing certificate...
	I0821 04:26:06.565997    4298 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17102-920/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17102-920/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0821 04:26:06.699953    4298 main.go:141] libmachine: Creating SSH key...
	I0821 04:26:06.763481    4298 main.go:141] libmachine: Creating Disk image...
	I0821 04:26:06.763489    4298 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0821 04:26:06.763635    4298 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17102-920/.minikube/machines/force-systemd-flag-740000/disk.qcow2.raw /Users/jenkins/minikube-integration/17102-920/.minikube/machines/force-systemd-flag-740000/disk.qcow2
	I0821 04:26:06.772197    4298 main.go:141] libmachine: STDOUT: 
	I0821 04:26:06.772221    4298 main.go:141] libmachine: STDERR: 
	I0821 04:26:06.772295    4298 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/force-systemd-flag-740000/disk.qcow2 +20000M
	I0821 04:26:06.779564    4298 main.go:141] libmachine: STDOUT: Image resized.
	
	I0821 04:26:06.779577    4298 main.go:141] libmachine: STDERR: 
	I0821 04:26:06.779594    4298 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17102-920/.minikube/machines/force-systemd-flag-740000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17102-920/.minikube/machines/force-systemd-flag-740000/disk.qcow2
	I0821 04:26:06.779601    4298 main.go:141] libmachine: Starting QEMU VM...
	I0821 04:26:06.779646    4298 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17102-920/.minikube/machines/force-systemd-flag-740000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/force-systemd-flag-740000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17102-920/.minikube/machines/force-systemd-flag-740000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:78:92:41:7f:2b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/force-systemd-flag-740000/disk.qcow2
	I0821 04:26:06.781126    4298 main.go:141] libmachine: STDOUT: 
	I0821 04:26:06.781139    4298 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0821 04:26:06.781152    4298 client.go:171] LocalClient.Create took 215.9645ms
	I0821 04:26:08.783312    4298 start.go:128] duration metric: createHost completed in 2.276023791s
	I0821 04:26:08.783363    4298 start.go:83] releasing machines lock for "force-systemd-flag-740000", held for 2.2764035s
	W0821 04:26:08.783770    4298 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-740000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-740000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:26:08.794485    4298 out.go:177] 
	W0821 04:26:08.798440    4298 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0821 04:26:08.798496    4298 out.go:239] * 
	* 
	W0821 04:26:08.801021    4298 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0821 04:26:08.810333    4298 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-740000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-740000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-740000 ssh "docker info --format {{.CgroupDriver}}": exit status 89 (75.114709ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p force-systemd-flag-740000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-740000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 89
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2023-08-21 04:26:08.901498 -0700 PDT m=+3173.913923501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-740000 -n force-systemd-flag-740000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-740000 -n force-systemd-flag-740000: exit status 7 (34.548ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-740000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-740000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-740000
--- FAIL: TestForceSystemdFlag (10.51s)
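
The post-mortem above treats exit status 7 from the status command as "may be ok" because minikube status encodes availability in its exit code as a bitmask (7 means host, cluster, and Kubernetes are all down) while stdout still carries the host state, here "Stopped". A sketch of reading both, illustrative only and not the helpers_test.go source (binary path and profile name copied from the log):

// statuscheck.go: run the same status command the post-mortem uses and
// recover both the printed host state and the non-zero exit code.
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64",
		"status", "--format={{.Host}}", "-p", "force-systemd-flag-740000")
	out, err := cmd.Output() // stdout is still populated on a non-zero exit
	state := strings.TrimSpace(string(out))
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Exit status 7 with "Stopped" on stdout means the profile
		// exists but nothing is running, which the harness tolerates.
		fmt.Printf("host=%s exit=%d\n", state, exitErr.ExitCode())
		return
	}
	if err != nil {
		fmt.Println("failed to run minikube status:", err)
		return
	}
	fmt.Printf("host=%s\n", state)
}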

TestForceSystemdEnv (9.97s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-889000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-889000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.761843333s)

-- stdout --
	* [force-systemd-env-889000] minikube v1.31.2 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17102
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node force-systemd-env-889000 in cluster force-systemd-env-889000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-889000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0821 04:25:54.244930    4266 out.go:296] Setting OutFile to fd 1 ...
	I0821 04:25:54.245053    4266 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:25:54.245056    4266 out.go:309] Setting ErrFile to fd 2...
	I0821 04:25:54.245059    4266 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:25:54.245187    4266 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17102-920/.minikube/bin
	I0821 04:25:54.246214    4266 out.go:303] Setting JSON to false
	I0821 04:25:54.261739    4266 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3328,"bootTime":1692613826,"procs":423,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0821 04:25:54.261808    4266 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0821 04:25:54.267166    4266 out.go:177] * [force-systemd-env-889000] minikube v1.31.2 on Darwin 13.5 (arm64)
	I0821 04:25:54.277075    4266 out.go:177]   - MINIKUBE_LOCATION=17102
	I0821 04:25:54.272700    4266 notify.go:220] Checking for updates...
	I0821 04:25:54.289120    4266 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	I0821 04:25:54.297130    4266 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0821 04:25:54.305161    4266 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0821 04:25:54.313173    4266 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	I0821 04:25:54.321158    4266 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0821 04:25:54.325403    4266 config.go:182] Loaded profile config "multinode-806000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0821 04:25:54.325444    4266 driver.go:373] Setting default libvirt URI to qemu:///system
	I0821 04:25:54.330223    4266 out.go:177] * Using the qemu2 driver based on user configuration
	I0821 04:25:54.337191    4266 start.go:298] selected driver: qemu2
	I0821 04:25:54.337196    4266 start.go:902] validating driver "qemu2" against <nil>
	I0821 04:25:54.337202    4266 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0821 04:25:54.339313    4266 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0821 04:25:54.343153    4266 out.go:177] * Automatically selected the socket_vmnet network
	I0821 04:25:54.347230    4266 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0821 04:25:54.347248    4266 cni.go:84] Creating CNI manager for ""
	I0821 04:25:54.347256    4266 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0821 04:25:54.347261    4266 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0821 04:25:54.347267    4266 start_flags.go:319] config:
	{Name:force-systemd-env-889000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:force-systemd-env-889000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 04:25:54.351467    4266 iso.go:125] acquiring lock: {Name:mk813ea611542195bb0511881888be3fabc72ff6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 04:25:54.355205    4266 out.go:177] * Starting control plane node force-systemd-env-889000 in cluster force-systemd-env-889000
	I0821 04:25:54.363102    4266 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime docker
	I0821 04:25:54.363129    4266 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4
	I0821 04:25:54.363139    4266 cache.go:57] Caching tarball of preloaded images
	I0821 04:25:54.363202    4266 preload.go:174] Found /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0821 04:25:54.363208    4266 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on docker
	I0821 04:25:54.363284    4266 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/force-systemd-env-889000/config.json ...
	I0821 04:25:54.363297    4266 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/force-systemd-env-889000/config.json: {Name:mk3810f379aeb7368271299b5aa2616d47febc2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 04:25:54.363488    4266 start.go:365] acquiring machines lock for force-systemd-env-889000: {Name:mk9b32d9fe994be32d77812db464b2cfa7bfb400 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0821 04:25:54.363517    4266 start.go:369] acquired machines lock for "force-systemd-env-889000" in 23.917µs
	I0821 04:25:54.363529    4266 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-889000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:force-systemd-env-889000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0821 04:25:54.363567    4266 start.go:125] createHost starting for "" (driver="qemu2")
	I0821 04:25:54.368143    4266 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0821 04:25:54.383685    4266 start.go:159] libmachine.API.Create for "force-systemd-env-889000" (driver="qemu2")
	I0821 04:25:54.383716    4266 client.go:168] LocalClient.Create starting
	I0821 04:25:54.383790    4266 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem
	I0821 04:25:54.383813    4266 main.go:141] libmachine: Decoding PEM data...
	I0821 04:25:54.383826    4266 main.go:141] libmachine: Parsing certificate...
	I0821 04:25:54.383866    4266 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/cert.pem
	I0821 04:25:54.383891    4266 main.go:141] libmachine: Decoding PEM data...
	I0821 04:25:54.383898    4266 main.go:141] libmachine: Parsing certificate...
	I0821 04:25:54.384227    4266 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17102-920/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17102-920/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0821 04:25:54.506039    4266 main.go:141] libmachine: Creating SSH key...
	I0821 04:25:54.609067    4266 main.go:141] libmachine: Creating Disk image...
	I0821 04:25:54.609074    4266 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0821 04:25:54.609208    4266 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17102-920/.minikube/machines/force-systemd-env-889000/disk.qcow2.raw /Users/jenkins/minikube-integration/17102-920/.minikube/machines/force-systemd-env-889000/disk.qcow2
	I0821 04:25:54.618264    4266 main.go:141] libmachine: STDOUT: 
	I0821 04:25:54.618283    4266 main.go:141] libmachine: STDERR: 
	I0821 04:25:54.618363    4266 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/force-systemd-env-889000/disk.qcow2 +20000M
	I0821 04:25:54.625956    4266 main.go:141] libmachine: STDOUT: Image resized.
	
	I0821 04:25:54.625970    4266 main.go:141] libmachine: STDERR: 
	I0821 04:25:54.625996    4266 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17102-920/.minikube/machines/force-systemd-env-889000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17102-920/.minikube/machines/force-systemd-env-889000/disk.qcow2
	I0821 04:25:54.626007    4266 main.go:141] libmachine: Starting QEMU VM...
	I0821 04:25:54.626042    4266 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17102-920/.minikube/machines/force-systemd-env-889000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/force-systemd-env-889000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17102-920/.minikube/machines/force-systemd-env-889000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:26:c3:e8:de:ca -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/force-systemd-env-889000/disk.qcow2
	I0821 04:25:54.627596    4266 main.go:141] libmachine: STDOUT: 
	I0821 04:25:54.627610    4266 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0821 04:25:54.627630    4266 client.go:171] LocalClient.Create took 243.912583ms
	I0821 04:25:56.629822    4266 start.go:128] duration metric: createHost completed in 2.266276292s
	I0821 04:25:56.629902    4266 start.go:83] releasing machines lock for "force-systemd-env-889000", held for 2.26642725s
	W0821 04:25:56.629957    4266 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:25:56.641037    4266 out.go:177] * Deleting "force-systemd-env-889000" in qemu2 ...
	W0821 04:25:56.660409    4266 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:25:56.660436    4266 start.go:687] Will try again in 5 seconds ...
	I0821 04:26:01.662598    4266 start.go:365] acquiring machines lock for force-systemd-env-889000: {Name:mk9b32d9fe994be32d77812db464b2cfa7bfb400 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0821 04:26:01.663084    4266 start.go:369] acquired machines lock for "force-systemd-env-889000" in 365.708µs
	I0821 04:26:01.663215    4266 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-889000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:force-systemd-env-889000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0821 04:26:01.663476    4266 start.go:125] createHost starting for "" (driver="qemu2")
	I0821 04:26:01.673059    4266 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0821 04:26:01.717711    4266 start.go:159] libmachine.API.Create for "force-systemd-env-889000" (driver="qemu2")
	I0821 04:26:01.717764    4266 client.go:168] LocalClient.Create starting
	I0821 04:26:01.717877    4266 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem
	I0821 04:26:01.717927    4266 main.go:141] libmachine: Decoding PEM data...
	I0821 04:26:01.717945    4266 main.go:141] libmachine: Parsing certificate...
	I0821 04:26:01.718022    4266 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/cert.pem
	I0821 04:26:01.718059    4266 main.go:141] libmachine: Decoding PEM data...
	I0821 04:26:01.718068    4266 main.go:141] libmachine: Parsing certificate...
	I0821 04:26:01.718800    4266 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17102-920/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17102-920/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0821 04:26:01.849878    4266 main.go:141] libmachine: Creating SSH key...
	I0821 04:26:01.917541    4266 main.go:141] libmachine: Creating Disk image...
	I0821 04:26:01.917546    4266 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0821 04:26:01.917692    4266 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17102-920/.minikube/machines/force-systemd-env-889000/disk.qcow2.raw /Users/jenkins/minikube-integration/17102-920/.minikube/machines/force-systemd-env-889000/disk.qcow2
	I0821 04:26:01.926190    4266 main.go:141] libmachine: STDOUT: 
	I0821 04:26:01.926206    4266 main.go:141] libmachine: STDERR: 
	I0821 04:26:01.926280    4266 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/force-systemd-env-889000/disk.qcow2 +20000M
	I0821 04:26:01.933394    4266 main.go:141] libmachine: STDOUT: Image resized.
	
	I0821 04:26:01.933408    4266 main.go:141] libmachine: STDERR: 
	I0821 04:26:01.933422    4266 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17102-920/.minikube/machines/force-systemd-env-889000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17102-920/.minikube/machines/force-systemd-env-889000/disk.qcow2
	I0821 04:26:01.933429    4266 main.go:141] libmachine: Starting QEMU VM...
	I0821 04:26:01.933462    4266 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17102-920/.minikube/machines/force-systemd-env-889000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/force-systemd-env-889000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17102-920/.minikube/machines/force-systemd-env-889000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:17:4f:51:a3:77 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/force-systemd-env-889000/disk.qcow2
	I0821 04:26:01.935011    4266 main.go:141] libmachine: STDOUT: 
	I0821 04:26:01.935030    4266 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0821 04:26:01.935042    4266 client.go:171] LocalClient.Create took 217.275167ms
	I0821 04:26:03.937221    4266 start.go:128] duration metric: createHost completed in 2.273767292s
	I0821 04:26:03.937301    4266 start.go:83] releasing machines lock for "force-systemd-env-889000", held for 2.274242542s
	W0821 04:26:03.937717    4266 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-889000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-889000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:26:03.951511    4266 out.go:177] 
	W0821 04:26:03.955673    4266 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0821 04:26:03.955694    4266 out.go:239] * 
	* 
	W0821 04:26:03.958556    4266 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0821 04:26:03.966610    4266 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-889000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-889000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-889000 ssh "docker info --format {{.CgroupDriver}}": exit status 89 (74.111458ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p force-systemd-env-889000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-889000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 89
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2023-08-21 04:26:04.055895 -0700 PDT m=+3169.068220168
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-889000 -n force-systemd-env-889000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-889000 -n force-systemd-env-889000: exit status 7 (33.577375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-889000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-889000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-889000
--- FAIL: TestForceSystemdEnv (9.97s)

TestFunctional/parallel/ServiceCmdConnect (32.16s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1626: (dbg) Run:  kubectl --context functional-818000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-818000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58d66798bb-j2x9r" [2c452360-3344-4683-8627-4ae2bbe7a380] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-58d66798bb-j2x9r" [2c452360-3344-4683-8627-4ae2bbe7a380] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.014243209s
functional_test.go:1648: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.105.4:31810
functional_test.go:1660: error fetching http://192.168.105.4:31810: Get "http://192.168.105.4:31810": dial tcp 192.168.105.4:31810: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:31810: Get "http://192.168.105.4:31810": dial tcp 192.168.105.4:31810: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:31810: Get "http://192.168.105.4:31810": dial tcp 192.168.105.4:31810: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:31810: Get "http://192.168.105.4:31810": dial tcp 192.168.105.4:31810: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:31810: Get "http://192.168.105.4:31810": dial tcp 192.168.105.4:31810: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:31810: Get "http://192.168.105.4:31810": dial tcp 192.168.105.4:31810: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:31810: Get "http://192.168.105.4:31810": dial tcp 192.168.105.4:31810: connect: connection refused
functional_test.go:1680: failed to fetch http://192.168.105.4:31810: Get "http://192.168.105.4:31810": dial tcp 192.168.105.4:31810: connect: connection refused
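The test polls the NodePort URL and only fails after repeated refusals. A minimal sketch of that fetch-with-retry pattern (attempt count and delay here are illustrative, not the test's exact values):

	// fetch_retry.go - retries an HTTP GET the way the service test polls
	// the NodePort endpoint, returning the last error if it never answers.
	package main

	import (
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func fetchWithRetry(url string, attempts int, delay time.Duration) ([]byte, error) {
		var lastErr error
		for i := 0; i < attempts; i++ {
			resp, err := http.Get(url)
			if err == nil {
				defer resp.Body.Close()
				return io.ReadAll(resp.Body)
			}
			lastErr = err // e.g. "connect: connection refused" when no pod backs the service
			time.Sleep(delay)
		}
		return nil, fmt.Errorf("after %d attempts: %w", attempts, lastErr)
	}

	func main() {
		if _, err := fetchWithRetry("http://192.168.105.4:31810", 7, 2*time.Second); err != nil {
			fmt.Println(err)
		}
	}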
functional_test.go:1597: service test failed - dumping debug information
functional_test.go:1598: -----------------------service failure post-mortem--------------------------------
functional_test.go:1601: (dbg) Run:  kubectl --context functional-818000 describe po hello-node-connect
functional_test.go:1605: hello-node pod describe:
Name:             hello-node-connect-58d66798bb-j2x9r
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-818000/192.168.105.4
Start Time:       Mon, 21 Aug 2023 04:17:05 -0700
Labels:           app=hello-node-connect
                  pod-template-hash=58d66798bb
Annotations:      <none>
Status:           Running
IP:               10.244.0.10
IPs:
  IP:           10.244.0.10
Controlled By:  ReplicaSet/hello-node-connect-58d66798bb
Containers:
  echoserver-arm:
    Container ID:   docker://f8f4e6973c8f44f492533c96f99eb51ca0310187b2f78c701a22396b20f4a00f
    Image:          registry.k8s.io/echoserver-arm:1.8
    Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Mon, 21 Aug 2023 04:17:20 -0700
      Finished:     Mon, 21 Aug 2023 04:17:20 -0700
    Ready:          False
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dckhk (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  kube-api-access-dckhk:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  31s                default-scheduler  Successfully assigned default/hello-node-connect-58d66798bb-j2x9r to functional-818000
  Normal   Pulled     16s (x3 over 30s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
  Normal   Created    16s (x3 over 30s)  kubelet            Created container echoserver-arm
  Normal   Started    16s (x3 over 30s)  kubelet            Started container echoserver-arm
  Warning  BackOff    2s (x3 over 29s)   kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-58d66798bb-j2x9r_default(2c452360-3344-4683-8627-4ae2bbe7a380)

functional_test.go:1607: (dbg) Run:  kubectl --context functional-818000 logs -l app=hello-node-connect
functional_test.go:1611: hello-node logs:
exec /usr/sbin/nginx: exec format error
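That single log line points at the likely root cause: "exec format error" almost always means the binary's architecture does not match the node's - here an arm64 node executing an image whose entrypoint is an x86-64 /usr/sbin/nginx. A hedged sketch for verifying a binary's target architecture, e.g. after copying it out of the image with docker cp:

	// elfarch.go - prints the ELF machine type of a binary; EM_X86_64 on an
	// arm64 (EM_AARCH64) node produces exactly this "exec format error".
	package main

	import (
		"debug/elf"
		"fmt"
		"os"
	)

	func main() {
		if len(os.Args) != 2 {
			fmt.Fprintln(os.Stderr, "usage: elfarch <binary>")
			os.Exit(2)
		}
		f, err := elf.Open(os.Args[1])
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()
		fmt.Printf("machine: %s\n", f.Machine)
	}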
functional_test.go:1613: (dbg) Run:  kubectl --context functional-818000 describe svc hello-node-connect
functional_test.go:1617: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.106.81.98
IPs:                      10.106.81.98
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31810/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
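Note the empty Endpoints: field above: because the crash-looping pod never becomes Ready, the Service has no backends, which is exactly why every NodePort probe was refused. A quick confirmation sketch (context and resource names taken from the log above):

	// endpoints_check.go - lists the endpoints behind the service; an empty
	// ENDPOINTS column confirms no ready pod is backing the NodePort.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("kubectl", "--context", "functional-818000",
			"get", "endpoints", "hello-node-connect").CombinedOutput()
		if err != nil {
			fmt.Fprintln(nil, err) // unreachable placeholder removed below
		}
		fmt.Print(string(out)) // CombinedOutput includes stderr, so errors surface here too
	}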
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-818000 -n functional-818000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                                         Args                                                         |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| addons  | functional-818000 addons list                                                                                        | functional-818000 | jenkins | v1.31.2 | 21 Aug 23 04:17 PDT | 21 Aug 23 04:17 PDT |
	| addons  | functional-818000 addons list                                                                                        | functional-818000 | jenkins | v1.31.2 | 21 Aug 23 04:17 PDT | 21 Aug 23 04:17 PDT |
	|         | -o json                                                                                                              |                   |         |         |                     |                     |
	| service | functional-818000 service                                                                                            | functional-818000 | jenkins | v1.31.2 | 21 Aug 23 04:17 PDT | 21 Aug 23 04:17 PDT |
	|         | hello-node-connect --url                                                                                             |                   |         |         |                     |                     |
	| ssh     | functional-818000 ssh findmnt                                                                                        | functional-818000 | jenkins | v1.31.2 | 21 Aug 23 04:17 PDT |                     |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| mount   | -p functional-818000                                                                                                 | functional-818000 | jenkins | v1.31.2 | 21 Aug 23 04:17 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port203537942/001:/mount-9p       |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-818000 ssh findmnt                                                                                        | functional-818000 | jenkins | v1.31.2 | 21 Aug 23 04:17 PDT | 21 Aug 23 04:17 PDT |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-818000 ssh -- ls                                                                                          | functional-818000 | jenkins | v1.31.2 | 21 Aug 23 04:17 PDT | 21 Aug 23 04:17 PDT |
	|         | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh     | functional-818000 ssh cat                                                                                            | functional-818000 | jenkins | v1.31.2 | 21 Aug 23 04:17 PDT | 21 Aug 23 04:17 PDT |
	|         | /mount-9p/test-1692616643984220000                                                                                   |                   |         |         |                     |                     |
	| ssh     | functional-818000 ssh stat                                                                                           | functional-818000 | jenkins | v1.31.2 | 21 Aug 23 04:17 PDT | 21 Aug 23 04:17 PDT |
	|         | /mount-9p/created-by-test                                                                                            |                   |         |         |                     |                     |
	| ssh     | functional-818000 ssh stat                                                                                           | functional-818000 | jenkins | v1.31.2 | 21 Aug 23 04:17 PDT | 21 Aug 23 04:17 PDT |
	|         | /mount-9p/created-by-pod                                                                                             |                   |         |         |                     |                     |
	| ssh     | functional-818000 ssh sudo                                                                                           | functional-818000 | jenkins | v1.31.2 | 21 Aug 23 04:17 PDT | 21 Aug 23 04:17 PDT |
	|         | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| ssh     | functional-818000 ssh findmnt                                                                                        | functional-818000 | jenkins | v1.31.2 | 21 Aug 23 04:17 PDT |                     |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| mount   | -p functional-818000                                                                                                 | functional-818000 | jenkins | v1.31.2 | 21 Aug 23 04:17 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port3205394019/001:/mount-9p |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1 --port 46464                                                                                  |                   |         |         |                     |                     |
	| ssh     | functional-818000 ssh findmnt                                                                                        | functional-818000 | jenkins | v1.31.2 | 21 Aug 23 04:17 PDT | 21 Aug 23 04:17 PDT |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-818000 ssh -- ls                                                                                          | functional-818000 | jenkins | v1.31.2 | 21 Aug 23 04:17 PDT | 21 Aug 23 04:17 PDT |
	|         | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh     | functional-818000 ssh sudo                                                                                           | functional-818000 | jenkins | v1.31.2 | 21 Aug 23 04:17 PDT |                     |
	|         | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| mount   | -p functional-818000                                                                                                 | functional-818000 | jenkins | v1.31.2 | 21 Aug 23 04:17 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup178691038/001:/mount2    |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount   | -p functional-818000                                                                                                 | functional-818000 | jenkins | v1.31.2 | 21 Aug 23 04:17 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup178691038/001:/mount3    |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount   | -p functional-818000                                                                                                 | functional-818000 | jenkins | v1.31.2 | 21 Aug 23 04:17 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup178691038/001:/mount1    |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-818000 ssh findmnt                                                                                        | functional-818000 | jenkins | v1.31.2 | 21 Aug 23 04:17 PDT |                     |
	|         | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh     | functional-818000 ssh findmnt                                                                                        | functional-818000 | jenkins | v1.31.2 | 21 Aug 23 04:17 PDT |                     |
	|         | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh     | functional-818000 ssh findmnt                                                                                        | functional-818000 | jenkins | v1.31.2 | 21 Aug 23 04:17 PDT |                     |
	|         | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh     | functional-818000 ssh findmnt                                                                                        | functional-818000 | jenkins | v1.31.2 | 21 Aug 23 04:17 PDT |                     |
	|         | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh     | functional-818000 ssh findmnt                                                                                        | functional-818000 | jenkins | v1.31.2 | 21 Aug 23 04:17 PDT |                     |
	|         | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh     | functional-818000 ssh findmnt                                                                                        | functional-818000 | jenkins | v1.31.2 | 21 Aug 23 04:17 PDT |                     |
	|         | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/21 04:16:02
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.7 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0821 04:16:02.321209    2960 out.go:296] Setting OutFile to fd 1 ...
	I0821 04:16:02.321318    2960 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:16:02.321319    2960 out.go:309] Setting ErrFile to fd 2...
	I0821 04:16:02.321321    2960 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:16:02.321424    2960 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17102-920/.minikube/bin
	I0821 04:16:02.322360    2960 out.go:303] Setting JSON to false
	I0821 04:16:02.337704    2960 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2736,"bootTime":1692613826,"procs":412,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0821 04:16:02.337767    2960 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0821 04:16:02.341562    2960 out.go:177] * [functional-818000] minikube v1.31.2 on Darwin 13.5 (arm64)
	I0821 04:16:02.352621    2960 out.go:177]   - MINIKUBE_LOCATION=17102
	I0821 04:16:02.356565    2960 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	I0821 04:16:02.352689    2960 notify.go:220] Checking for updates...
	I0821 04:16:02.363564    2960 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0821 04:16:02.366610    2960 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0821 04:16:02.369555    2960 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	I0821 04:16:02.372535    2960 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0821 04:16:02.375921    2960 config.go:182] Loaded profile config "functional-818000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0821 04:16:02.375969    2960 driver.go:373] Setting default libvirt URI to qemu:///system
	I0821 04:16:02.379544    2960 out.go:177] * Using the qemu2 driver based on existing profile
	I0821 04:16:02.386544    2960 start.go:298] selected driver: qemu2
	I0821 04:16:02.386547    2960 start.go:902] validating driver "qemu2" against &{Name:functional-818000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:functional-818000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 04:16:02.386603    2960 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0821 04:16:02.388659    2960 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0821 04:16:02.388682    2960 cni.go:84] Creating CNI manager for ""
	I0821 04:16:02.388687    2960 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0821 04:16:02.388692    2960 start_flags.go:319] config:
	{Name:functional-818000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:functional-818000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 04:16:02.392724    2960 iso.go:125] acquiring lock: {Name:mk813ea611542195bb0511881888be3fabc72ff6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 04:16:02.396498    2960 out.go:177] * Starting control plane node functional-818000 in cluster functional-818000
	I0821 04:16:02.404602    2960 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime docker
	I0821 04:16:02.404620    2960 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4
	I0821 04:16:02.404632    2960 cache.go:57] Caching tarball of preloaded images
	I0821 04:16:02.404693    2960 preload.go:174] Found /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0821 04:16:02.404697    2960 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on docker
	I0821 04:16:02.404774    2960 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/functional-818000/config.json ...
	I0821 04:16:02.405010    2960 start.go:365] acquiring machines lock for functional-818000: {Name:mk9b32d9fe994be32d77812db464b2cfa7bfb400 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0821 04:16:02.405037    2960 start.go:369] acquired machines lock for "functional-818000" in 23.083µs
	I0821 04:16:02.405044    2960 start.go:96] Skipping create...Using existing machine configuration
	I0821 04:16:02.405047    2960 fix.go:54] fixHost starting: 
	I0821 04:16:02.405680    2960 fix.go:102] recreateIfNeeded on functional-818000: state=Running err=<nil>
	W0821 04:16:02.405688    2960 fix.go:128] unexpected machine state, will restart: <nil>
	I0821 04:16:02.409600    2960 out.go:177] * Updating the running qemu2 "functional-818000" VM ...
	I0821 04:16:02.416520    2960 machine.go:88] provisioning docker machine ...
	I0821 04:16:02.416528    2960 buildroot.go:166] provisioning hostname "functional-818000"
	I0821 04:16:02.416558    2960 main.go:141] libmachine: Using SSH client type: native
	I0821 04:16:02.416824    2960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1004621e0] 0x100464c40 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0821 04:16:02.416829    2960 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-818000 && echo "functional-818000" | sudo tee /etc/hostname
	I0821 04:16:02.469227    2960 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-818000
	
	I0821 04:16:02.469271    2960 main.go:141] libmachine: Using SSH client type: native
	I0821 04:16:02.469523    2960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1004621e0] 0x100464c40 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0821 04:16:02.469530    2960 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-818000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-818000/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-818000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0821 04:16:02.520920    2960 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0821 04:16:02.520928    2960 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17102-920/.minikube CaCertPath:/Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17102-920/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17102-920/.minikube}
	I0821 04:16:02.520934    2960 buildroot.go:174] setting up certificates
	I0821 04:16:02.520942    2960 provision.go:83] configureAuth start
	I0821 04:16:02.520945    2960 provision.go:138] copyHostCerts
	I0821 04:16:02.521000    2960 exec_runner.go:144] found /Users/jenkins/minikube-integration/17102-920/.minikube/ca.pem, removing ...
	I0821 04:16:02.521009    2960 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17102-920/.minikube/ca.pem
	I0821 04:16:02.521104    2960 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17102-920/.minikube/ca.pem (1078 bytes)
	I0821 04:16:02.521288    2960 exec_runner.go:144] found /Users/jenkins/minikube-integration/17102-920/.minikube/cert.pem, removing ...
	I0821 04:16:02.521290    2960 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17102-920/.minikube/cert.pem
	I0821 04:16:02.521330    2960 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17102-920/.minikube/cert.pem (1123 bytes)
	I0821 04:16:02.521421    2960 exec_runner.go:144] found /Users/jenkins/minikube-integration/17102-920/.minikube/key.pem, removing ...
	I0821 04:16:02.521422    2960 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17102-920/.minikube/key.pem
	I0821 04:16:02.521458    2960 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17102-920/.minikube/key.pem (1679 bytes)
	I0821 04:16:02.521523    2960 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17102-920/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca-key.pem org=jenkins.functional-818000 san=[192.168.105.4 192.168.105.4 localhost 127.0.0.1 minikube functional-818000]
	I0821 04:16:02.642834    2960 provision.go:172] copyRemoteCerts
	I0821 04:16:02.642887    2960 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0821 04:16:02.642893    2960 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/functional-818000/id_rsa Username:docker}
	I0821 04:16:02.673961    2960 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0821 04:16:02.681438    2960 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0821 04:16:02.689168    2960 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0821 04:16:02.696717    2960 provision.go:86] duration metric: configureAuth took 175.768167ms
	I0821 04:16:02.696723    2960 buildroot.go:189] setting minikube options for container-runtime
	I0821 04:16:02.696839    2960 config.go:182] Loaded profile config "functional-818000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0821 04:16:02.696876    2960 main.go:141] libmachine: Using SSH client type: native
	I0821 04:16:02.697098    2960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1004621e0] 0x100464c40 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0821 04:16:02.697101    2960 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0821 04:16:02.755965    2960 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0821 04:16:02.755971    2960 buildroot.go:70] root file system type: tmpfs
	I0821 04:16:02.756024    2960 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0821 04:16:02.756081    2960 main.go:141] libmachine: Using SSH client type: native
	I0821 04:16:02.756323    2960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1004621e0] 0x100464c40 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0821 04:16:02.756360    2960 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0821 04:16:02.811419    2960 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0821 04:16:02.811475    2960 main.go:141] libmachine: Using SSH client type: native
	I0821 04:16:02.811713    2960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1004621e0] 0x100464c40 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0821 04:16:02.811720    2960 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0821 04:16:02.865315    2960 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0821 04:16:02.865322    2960 machine.go:91] provisioned docker machine in 448.802542ms
	I0821 04:16:02.865326    2960 start.go:300] post-start starting for "functional-818000" (driver="qemu2")
	I0821 04:16:02.865331    2960 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0821 04:16:02.865373    2960 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0821 04:16:02.865379    2960 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/functional-818000/id_rsa Username:docker}
	I0821 04:16:02.893525    2960 ssh_runner.go:195] Run: cat /etc/os-release
	I0821 04:16:02.894997    2960 info.go:137] Remote host: Buildroot 2021.02.12
	I0821 04:16:02.895005    2960 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17102-920/.minikube/addons for local assets ...
	I0821 04:16:02.895064    2960 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17102-920/.minikube/files for local assets ...
	I0821 04:16:02.895168    2960 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17102-920/.minikube/files/etc/ssl/certs/13622.pem -> 13622.pem in /etc/ssl/certs
	I0821 04:16:02.895272    2960 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17102-920/.minikube/files/etc/test/nested/copy/1362/hosts -> hosts in /etc/test/nested/copy/1362
	I0821 04:16:02.895305    2960 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1362
	I0821 04:16:02.898157    2960 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/files/etc/ssl/certs/13622.pem --> /etc/ssl/certs/13622.pem (1708 bytes)
	I0821 04:16:02.904929    2960 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/files/etc/test/nested/copy/1362/hosts --> /etc/test/nested/copy/1362/hosts (40 bytes)
	I0821 04:16:02.912069    2960 start.go:303] post-start completed in 46.7385ms
	I0821 04:16:02.912074    2960 fix.go:56] fixHost completed within 507.032291ms
	I0821 04:16:02.912117    2960 main.go:141] libmachine: Using SSH client type: native
	I0821 04:16:02.912342    2960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1004621e0] 0x100464c40 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0821 04:16:02.912345    2960 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0821 04:16:02.963309    2960 main.go:141] libmachine: SSH cmd err, output: <nil>: 1692616563.010512463
	
	I0821 04:16:02.963315    2960 fix.go:206] guest clock: 1692616563.010512463
	I0821 04:16:02.963318    2960 fix.go:219] Guest: 2023-08-21 04:16:03.010512463 -0700 PDT Remote: 2023-08-21 04:16:02.912075 -0700 PDT m=+0.609709334 (delta=98.437463ms)
	I0821 04:16:02.963329    2960 fix.go:190] guest clock delta is within tolerance: 98.437463ms
	I0821 04:16:02.963331    2960 start.go:83] releasing machines lock for "functional-818000", held for 558.296833ms
	I0821 04:16:02.963653    2960 ssh_runner.go:195] Run: cat /version.json
	I0821 04:16:02.963665    2960 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0821 04:16:02.963666    2960 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/functional-818000/id_rsa Username:docker}
	I0821 04:16:02.963682    2960 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/functional-818000/id_rsa Username:docker}
	I0821 04:16:03.029912    2960 ssh_runner.go:195] Run: systemctl --version
	I0821 04:16:03.031900    2960 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0821 04:16:03.033534    2960 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0821 04:16:03.033564    2960 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0821 04:16:03.036348    2960 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0821 04:16:03.036352    2960 start.go:466] detecting cgroup driver to use...
	I0821 04:16:03.036414    2960 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0821 04:16:03.041966    2960 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0821 04:16:03.045628    2960 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0821 04:16:03.049225    2960 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0821 04:16:03.049247    2960 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0821 04:16:03.052483    2960 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0821 04:16:03.055719    2960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0821 04:16:03.058560    2960 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0821 04:16:03.061897    2960 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0821 04:16:03.065608    2960 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0821 04:16:03.069035    2960 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0821 04:16:03.072104    2960 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0821 04:16:03.074788    2960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0821 04:16:03.160272    2960 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0821 04:16:03.170997    2960 start.go:466] detecting cgroup driver to use...
	I0821 04:16:03.171051    2960 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0821 04:16:03.176997    2960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0821 04:16:03.182164    2960 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0821 04:16:03.191156    2960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0821 04:16:03.195621    2960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0821 04:16:03.200191    2960 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0821 04:16:03.205716    2960 ssh_runner.go:195] Run: which cri-dockerd
	I0821 04:16:03.206998    2960 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0821 04:16:03.209875    2960 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0821 04:16:03.215062    2960 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0821 04:16:03.300277    2960 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0821 04:16:03.395052    2960 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0821 04:16:03.395060    2960 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0821 04:16:03.400145    2960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0821 04:16:03.481340    2960 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0821 04:16:14.810270    2960 ssh_runner.go:235] Completed: sudo systemctl restart docker: (11.329014667s)
	I0821 04:16:14.810344    2960 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0821 04:16:14.873838    2960 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0821 04:16:14.939244    2960 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0821 04:16:15.023376    2960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0821 04:16:15.088117    2960 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0821 04:16:15.095803    2960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0821 04:16:15.193670    2960 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0821 04:16:15.219077    2960 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0821 04:16:15.219150    2960 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0821 04:16:15.221961    2960 start.go:534] Will wait 60s for crictl version
	I0821 04:16:15.222015    2960 ssh_runner.go:195] Run: which crictl
	I0821 04:16:15.223586    2960 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0821 04:16:15.237125    2960 start.go:550] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.4
	RuntimeApiVersion:  v1alpha2
	I0821 04:16:15.237201    2960 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0821 04:16:15.244955    2960 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0821 04:16:15.256543    2960 out.go:204] * Preparing Kubernetes v1.27.4 on Docker 24.0.4 ...
	I0821 04:16:15.256680    2960 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0821 04:16:15.264479    2960 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0821 04:16:15.267506    2960 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime docker
	I0821 04:16:15.267559    2960 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0821 04:16:15.273473    2960 docker.go:636] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-818000
	registry.k8s.io/kube-apiserver:v1.27.4
	registry.k8s.io/kube-controller-manager:v1.27.4
	registry.k8s.io/kube-scheduler:v1.27.4
	registry.k8s.io/kube-proxy:v1.27.4
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0821 04:16:15.273482    2960 docker.go:566] Images already preloaded, skipping extraction
	I0821 04:16:15.273533    2960 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0821 04:16:15.279161    2960 docker.go:636] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-818000
	registry.k8s.io/kube-apiserver:v1.27.4
	registry.k8s.io/kube-scheduler:v1.27.4
	registry.k8s.io/kube-controller-manager:v1.27.4
	registry.k8s.io/kube-proxy:v1.27.4
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0821 04:16:15.279167    2960 cache_images.go:84] Images are preloaded, skipping loading
	I0821 04:16:15.279234    2960 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0821 04:16:15.289432    2960 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0821 04:16:15.289452    2960 cni.go:84] Creating CNI manager for ""
	I0821 04:16:15.289457    2960 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0821 04:16:15.289461    2960 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0821 04:16:15.289473    2960 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.4 APIServerPort:8441 KubernetesVersion:v1.27.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-818000 NodeName:functional-818000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0821 04:16:15.289537    2960 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.4
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-818000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.4
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.4"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0821 04:16:15.289564    2960 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=functional-818000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.4 ClusterName:functional-818000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:}
	I0821 04:16:15.289617    2960 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.4
	I0821 04:16:15.292678    2960 binaries.go:44] Found k8s binaries, skipping transfer
	I0821 04:16:15.292701    2960 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0821 04:16:15.295730    2960 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0821 04:16:15.301109    2960 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0821 04:16:15.306210    2960 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1953 bytes)
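The config rendered above is staged as /var/tmp/minikube/kubeadm.yaml.new before being swapped in. A minimal sketch of sanity-checking it on the node with the staged kubeadm binary (kubeadm config validate is available in kubeadm v1.26 and later):

    sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" \
      kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new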
	I0821 04:16:15.311304    2960 ssh_runner.go:195] Run: grep 192.168.105.4	control-plane.minikube.internal$ /etc/hosts
	I0821 04:16:15.312873    2960 certs.go:56] Setting up /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/functional-818000 for IP: 192.168.105.4
	I0821 04:16:15.312892    2960 certs.go:190] acquiring lock for shared ca certs: {Name:mkaf8bee91c9bef113528e728629bac5c142d5d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 04:16:15.313034    2960 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17102-920/.minikube/ca.key
	I0821 04:16:15.313076    2960 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17102-920/.minikube/proxy-client-ca.key
	I0821 04:16:15.313135    2960 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/functional-818000/client.key
	I0821 04:16:15.313181    2960 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/functional-818000/apiserver.key.942c473b
	I0821 04:16:15.313237    2960 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/functional-818000/proxy-client.key
	I0821 04:16:15.313377    2960 certs.go:437] found cert: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/Users/jenkins/minikube-integration/17102-920/.minikube/certs/1362.pem (1338 bytes)
	W0821 04:16:15.313400    2960 certs.go:433] ignoring /Users/jenkins/minikube-integration/17102-920/.minikube/certs/Users/jenkins/minikube-integration/17102-920/.minikube/certs/1362_empty.pem, impossibly tiny 0 bytes
	I0821 04:16:15.313405    2960 certs.go:437] found cert: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca-key.pem (1679 bytes)
	I0821 04:16:15.313425    2960 certs.go:437] found cert: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem (1078 bytes)
	I0821 04:16:15.313449    2960 certs.go:437] found cert: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/Users/jenkins/minikube-integration/17102-920/.minikube/certs/cert.pem (1123 bytes)
	I0821 04:16:15.313465    2960 certs.go:437] found cert: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/Users/jenkins/minikube-integration/17102-920/.minikube/certs/key.pem (1679 bytes)
	I0821 04:16:15.313505    2960 certs.go:437] found cert: /Users/jenkins/minikube-integration/17102-920/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17102-920/.minikube/files/etc/ssl/certs/13622.pem (1708 bytes)
	I0821 04:16:15.313845    2960 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/functional-818000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0821 04:16:15.320634    2960 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/functional-818000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0821 04:16:15.328084    2960 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/functional-818000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0821 04:16:15.335428    2960 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/functional-818000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0821 04:16:15.342613    2960 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0821 04:16:15.349745    2960 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0821 04:16:15.356439    2960 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0821 04:16:15.364089    2960 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0821 04:16:15.371459    2960 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/files/etc/ssl/certs/13622.pem --> /usr/share/ca-certificates/13622.pem (1708 bytes)
	I0821 04:16:15.379019    2960 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0821 04:16:15.385337    2960 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/certs/1362.pem --> /usr/share/ca-certificates/1362.pem (1338 bytes)
	I0821 04:16:15.392236    2960 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0821 04:16:15.397485    2960 ssh_runner.go:195] Run: openssl version
	I0821 04:16:15.399439    2960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1362.pem && ln -fs /usr/share/ca-certificates/1362.pem /etc/ssl/certs/1362.pem"
	I0821 04:16:15.402757    2960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1362.pem
	I0821 04:16:15.404412    2960 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 21 11:14 /usr/share/ca-certificates/1362.pem
	I0821 04:16:15.404432    2960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1362.pem
	I0821 04:16:15.406433    2960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1362.pem /etc/ssl/certs/51391683.0"
	I0821 04:16:15.409196    2960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13622.pem && ln -fs /usr/share/ca-certificates/13622.pem /etc/ssl/certs/13622.pem"
	I0821 04:16:15.412652    2960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13622.pem
	I0821 04:16:15.414182    2960 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 21 11:14 /usr/share/ca-certificates/13622.pem
	I0821 04:16:15.414198    2960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13622.pem
	I0821 04:16:15.416031    2960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/13622.pem /etc/ssl/certs/3ec20f2e.0"
	I0821 04:16:15.419207    2960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0821 04:16:15.422266    2960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0821 04:16:15.423796    2960 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 21 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I0821 04:16:15.423814    2960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0821 04:16:15.427598    2960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0821 04:16:15.430685    2960 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0821 04:16:15.432120    2960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0821 04:16:15.434101    2960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0821 04:16:15.435910    2960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0821 04:16:15.437819    2960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0821 04:16:15.439493    2960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0821 04:16:15.441505    2960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
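The ln -fs targets above follow OpenSSL's subject-hash convention: the system trust store is scanned by <hash>.0 file names, where the hash comes from openssl x509 -hash, and the -checkend runs then confirm each certificate is not about to expire. Both techniques condensed into a sketch:

    # link a CA into the trust store under its subject hash
    pem=/usr/share/ca-certificates/minikubeCA.pem
    sudo ln -fs "$pem" "/etc/ssl/certs/$(openssl x509 -hash -noout -in "$pem").0"

    # exit status 0 means the cert is still valid 86400s (24h) from now
    openssl x509 -noout -checkend 86400 \
      -in /var/lib/minikube/certs/apiserver-etcd-client.crt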
	I0821 04:16:15.443315    2960 kubeadm.go:404] StartCluster: {Name:functional-818000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:functional-818000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 04:16:15.443376    2960 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0821 04:16:15.453249    2960 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0821 04:16:15.456471    2960 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0821 04:16:15.456479    2960 kubeadm.go:636] restartCluster start
	I0821 04:16:15.456504    2960 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0821 04:16:15.459776    2960 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0821 04:16:15.460987    2960 kubeconfig.go:92] found "functional-818000" server: "https://192.168.105.4:8441"
	I0821 04:16:15.461747    2960 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0821 04:16:15.464648    2960 kubeadm.go:602] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.105.4"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
	I0821 04:16:15.464650    2960 kubeadm.go:1128] stopping kube-system containers ...
	I0821 04:16:15.464685    2960 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0821 04:16:15.472300    2960 docker.go:462] Stopping containers: [dd4174f7048e 782163e35398 3b33366d1b18 6f28b21a03ca ade3b22ef579 102aa0a24050 668c90460f58 c891ec2490eb f90be777672b 85684db8ec50 7ec760ca0e63 3f4ba66db620 b32202bb2050 86a5acd3e0ad 7e37e2566db6 9f9bcf3a23aa 475a21582370 b56615e7892e 36073f2bd007 36f192c9b0e0 e5f7aef5c059 88e138cff2b1 1ad33ecefcdf 7edcfb361b09 8020e7dabfbe 0dc8b606fab4 b27765e2faa6 28fbcabf961b]
	I0821 04:16:15.472354    2960 ssh_runner.go:195] Run: docker stop dd4174f7048e 782163e35398 3b33366d1b18 6f28b21a03ca ade3b22ef579 102aa0a24050 668c90460f58 c891ec2490eb f90be777672b 85684db8ec50 7ec760ca0e63 3f4ba66db620 b32202bb2050 86a5acd3e0ad 7e37e2566db6 9f9bcf3a23aa 475a21582370 b56615e7892e 36073f2bd007 36f192c9b0e0 e5f7aef5c059 88e138cff2b1 1ad33ecefcdf 7edcfb361b09 8020e7dabfbe 0dc8b606fab4 b27765e2faa6 28fbcabf961b
	I0821 04:16:15.479340    2960 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0821 04:16:15.564538    2960 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0821 04:16:15.568989    2960 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Aug 21 11:14 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5657 Aug 21 11:14 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Aug 21 11:14 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5605 Aug 21 11:14 /etc/kubernetes/scheduler.conf
	
	I0821 04:16:15.569016    2960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0821 04:16:15.572847    2960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0821 04:16:15.575988    2960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0821 04:16:15.579489    2960 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0821 04:16:15.579512    2960 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0821 04:16:15.582731    2960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0821 04:16:15.585429    2960 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0821 04:16:15.585450    2960 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0821 04:16:15.588416    2960 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0821 04:16:15.591637    2960 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0821 04:16:15.591640    2960 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0821 04:16:15.612985    2960 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0821 04:16:16.014897    2960 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0821 04:16:16.114629    2960 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0821 04:16:16.142146    2960 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
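Rather than running a full kubeadm init, the restart path replays individual init phases against the regenerated config. The equivalent sequence by hand, with paths as in the log (run on the node):

    sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" sh -c '
      kubeadm init phase certs all         --config /var/tmp/minikube/kubeadm.yaml
      kubeadm init phase kubeconfig all    --config /var/tmp/minikube/kubeadm.yaml
      kubeadm init phase kubelet-start     --config /var/tmp/minikube/kubeadm.yaml
      kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
      kubeadm init phase etcd local        --config /var/tmp/minikube/kubeadm.yaml'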
	I0821 04:16:16.180218    2960 api_server.go:52] waiting for apiserver process to appear ...
	I0821 04:16:16.180265    2960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0821 04:16:16.184180    2960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0821 04:16:16.690447    2960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0821 04:16:17.190452    2960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0821 04:16:17.194569    2960 api_server.go:72] duration metric: took 1.014360875s to wait for apiserver process to appear ...
	I0821 04:16:17.194573    2960 api_server.go:88] waiting for apiserver healthz status ...
	I0821 04:16:17.194587    2960 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0821 04:16:19.937458    2960 api_server.go:279] https://192.168.105.4:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0821 04:16:19.937465    2960 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0821 04:16:19.937470    2960 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0821 04:16:19.994664    2960 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0821 04:16:19.994671    2960 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0821 04:16:20.496701    2960 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0821 04:16:20.500319    2960 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0821 04:16:20.500327    2960 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0821 04:16:20.996725    2960 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0821 04:16:20.999936    2960 api_server.go:279] https://192.168.105.4:8441/healthz returned 200:
	ok
	I0821 04:16:21.005272    2960 api_server.go:141] control plane version: v1.27.4
	I0821 04:16:21.005277    2960 api_server.go:131] duration metric: took 3.810734333s to wait for apiserver health ...
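The initial 403 for system:anonymous is expected while the rbac/bootstrap-roles poststarthook is still failing; once the bootstrap roles are seeded, unauthenticated access to /healthz is allowed again and the probe flips to 200. The same endpoint can also be probed through kubectl, which authenticates with the admin kubeconfig; the ?verbose query lists each check exactly as in the 500 bodies above (a sketch, not taken from the test):

    kubectl --context functional-818000 get --raw '/healthz?verbose'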
	I0821 04:16:21.005281    2960 cni.go:84] Creating CNI manager for ""
	I0821 04:16:21.005285    2960 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0821 04:16:21.009374    2960 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0821 04:16:21.013435    2960 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0821 04:16:21.016509    2960 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
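The 457-byte conflist itself is not reproduced in this log; for orientation, a bridge CNI config of the kind the runtime expects generally has this shape (illustrative only, not the exact file minikube writes):

    cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF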
	I0821 04:16:21.021501    2960 system_pods.go:43] waiting for kube-system pods to appear ...
	I0821 04:16:21.025989    2960 system_pods.go:59] 7 kube-system pods found
	I0821 04:16:21.025996    2960 system_pods.go:61] "coredns-5d78c9869d-vf68m" [bd06a222-5e84-45c2-9d2b-cbcb234a649f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0821 04:16:21.025999    2960 system_pods.go:61] "etcd-functional-818000" [e3a3c628-63c0-4e03-8a01-2c138fee28a5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0821 04:16:21.026003    2960 system_pods.go:61] "kube-apiserver-functional-818000" [a9bfb3d5-7f96-47e9-bac9-848036bbe1c9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0821 04:16:21.026005    2960 system_pods.go:61] "kube-controller-manager-functional-818000" [3f28b122-e0dc-451f-bb5d-49c4e8acbaba] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0821 04:16:21.026008    2960 system_pods.go:61] "kube-proxy-cln6c" [18c47362-aae2-46a2-be4d-16ff6d349cef] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0821 04:16:21.026010    2960 system_pods.go:61] "kube-scheduler-functional-818000" [cbcc3d8c-f71e-487d-aa51-5efe0b225e39] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0821 04:16:21.026012    2960 system_pods.go:61] "storage-provisioner" [497a18a3-4473-413e-bf26-83b0fdbae4cf] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0821 04:16:21.026014    2960 system_pods.go:74] duration metric: took 4.510833ms to wait for pod list to return data ...
	I0821 04:16:21.026016    2960 node_conditions.go:102] verifying NodePressure condition ...
	I0821 04:16:21.027449    2960 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0821 04:16:21.027456    2960 node_conditions.go:123] node cpu capacity is 2
	I0821 04:16:21.027460    2960 node_conditions.go:105] duration metric: took 1.442291ms to run NodePressure ...
	I0821 04:16:21.027466    2960 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0821 04:16:21.119272    2960 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0821 04:16:21.121530    2960 kubeadm.go:787] kubelet initialised
	I0821 04:16:21.121534    2960 kubeadm.go:788] duration metric: took 2.25575ms waiting for restarted kubelet to initialise ...
	I0821 04:16:21.121538    2960 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0821 04:16:21.124536    2960 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-vf68m" in "kube-system" namespace to be "Ready" ...
	I0821 04:16:23.145291    2960 pod_ready.go:102] pod "coredns-5d78c9869d-vf68m" in "kube-system" namespace has status "Ready":"False"
	I0821 04:16:25.147205    2960 pod_ready.go:102] pod "coredns-5d78c9869d-vf68m" in "kube-system" namespace has status "Ready":"False"
	I0821 04:16:26.145248    2960 pod_ready.go:92] pod "coredns-5d78c9869d-vf68m" in "kube-system" namespace has status "Ready":"True"
	I0821 04:16:26.145273    2960 pod_ready.go:81] duration metric: took 5.020768416s waiting for pod "coredns-5d78c9869d-vf68m" in "kube-system" namespace to be "Ready" ...
	I0821 04:16:26.145288    2960 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-818000" in "kube-system" namespace to be "Ready" ...
	I0821 04:16:28.177876    2960 pod_ready.go:102] pod "etcd-functional-818000" in "kube-system" namespace has status "Ready":"False"
	I0821 04:16:30.679621    2960 pod_ready.go:92] pod "etcd-functional-818000" in "kube-system" namespace has status "Ready":"True"
	I0821 04:16:30.679645    2960 pod_ready.go:81] duration metric: took 4.534384666s waiting for pod "etcd-functional-818000" in "kube-system" namespace to be "Ready" ...
	I0821 04:16:30.679661    2960 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-818000" in "kube-system" namespace to be "Ready" ...
	I0821 04:16:32.702341    2960 pod_ready.go:102] pod "kube-apiserver-functional-818000" in "kube-system" namespace has status "Ready":"False"
	I0821 04:16:34.711654    2960 pod_ready.go:102] pod "kube-apiserver-functional-818000" in "kube-system" namespace has status "Ready":"False"
	I0821 04:16:36.205293    2960 pod_ready.go:92] pod "kube-apiserver-functional-818000" in "kube-system" namespace has status "Ready":"True"
	I0821 04:16:36.205305    2960 pod_ready.go:81] duration metric: took 5.525683084s waiting for pod "kube-apiserver-functional-818000" in "kube-system" namespace to be "Ready" ...
	I0821 04:16:36.205315    2960 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-818000" in "kube-system" namespace to be "Ready" ...
	I0821 04:16:36.213241    2960 pod_ready.go:92] pod "kube-controller-manager-functional-818000" in "kube-system" namespace has status "Ready":"True"
	I0821 04:16:36.213246    2960 pod_ready.go:81] duration metric: took 7.925542ms waiting for pod "kube-controller-manager-functional-818000" in "kube-system" namespace to be "Ready" ...
	I0821 04:16:36.213252    2960 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-cln6c" in "kube-system" namespace to be "Ready" ...
	I0821 04:16:36.218287    2960 pod_ready.go:92] pod "kube-proxy-cln6c" in "kube-system" namespace has status "Ready":"True"
	I0821 04:16:36.218291    2960 pod_ready.go:81] duration metric: took 5.035291ms waiting for pod "kube-proxy-cln6c" in "kube-system" namespace to be "Ready" ...
	I0821 04:16:36.218296    2960 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-818000" in "kube-system" namespace to be "Ready" ...
	I0821 04:16:36.222418    2960 pod_ready.go:92] pod "kube-scheduler-functional-818000" in "kube-system" namespace has status "Ready":"True"
	I0821 04:16:36.222423    2960 pod_ready.go:81] duration metric: took 4.122875ms waiting for pod "kube-scheduler-functional-818000" in "kube-system" namespace to be "Ready" ...
	I0821 04:16:36.222430    2960 pod_ready.go:38] duration metric: took 15.101016791s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
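The per-pod polling above is what kubectl wait performs in one shot; an equivalent manual check for one of the labels (a sketch, not part of the test):

    kubectl --context functional-818000 -n kube-system wait pod \
      -l k8s-app=kube-dns --for=condition=Ready --timeout=4m0s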
	I0821 04:16:36.222459    2960 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0821 04:16:36.229449    2960 ops.go:34] apiserver oom_adj: -16
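oom_adj is the legacy procfs interface; current kernels expose the same tunable as oom_score_adj, which Kubernetes keeps strongly negative for control-plane components so the OOM killer prefers other processes. The modern equivalent of the check above (sketch):

    cat /proc/$(pgrep -xn kube-apiserver)/oom_score_adj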
	I0821 04:16:36.229456    2960 kubeadm.go:640] restartCluster took 20.773150166s
	I0821 04:16:36.229461    2960 kubeadm.go:406] StartCluster complete in 20.786323791s
	I0821 04:16:36.229475    2960 settings.go:142] acquiring lock: {Name:mkeb461ec3a6a92ee32ce41e8df63d6759cb2728 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 04:16:36.229641    2960 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17102-920/kubeconfig
	I0821 04:16:36.230310    2960 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/kubeconfig: {Name:mk2bc9c64ad130c36a0253707ac2ba3f8fd22371 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 04:16:36.230648    2960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0821 04:16:36.230687    2960 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0821 04:16:36.230751    2960 addons.go:69] Setting storage-provisioner=true in profile "functional-818000"
	I0821 04:16:36.230757    2960 addons.go:69] Setting default-storageclass=true in profile "functional-818000"
	I0821 04:16:36.230762    2960 addons.go:231] Setting addon storage-provisioner=true in "functional-818000"
	W0821 04:16:36.230765    2960 addons.go:240] addon storage-provisioner should already be in state true
	I0821 04:16:36.230774    2960 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-818000"
	I0821 04:16:36.230797    2960 host.go:66] Checking if "functional-818000" exists ...
	I0821 04:16:36.230826    2960 config.go:182] Loaded profile config "functional-818000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0821 04:16:36.235463    2960 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0821 04:16:36.239781    2960 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0821 04:16:36.239788    2960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0821 04:16:36.239798    2960 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/functional-818000/id_rsa Username:docker}
	I0821 04:16:36.240415    2960 kapi.go:248] "coredns" deployment in "kube-system" namespace and "functional-818000" context rescaled to 1 replicas
	I0821 04:16:36.240430    2960 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0821 04:16:36.243701    2960 out.go:177] * Verifying Kubernetes components...
	I0821 04:16:36.242894    2960 addons.go:231] Setting addon default-storageclass=true in "functional-818000"
	W0821 04:16:36.251675    2960 addons.go:240] addon default-storageclass should already be in state true
	I0821 04:16:36.251693    2960 host.go:66] Checking if "functional-818000" exists ...
	I0821 04:16:36.251709    2960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0821 04:16:36.252592    2960 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0821 04:16:36.252596    2960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0821 04:16:36.252601    2960 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/functional-818000/id_rsa Username:docker}
	I0821 04:16:36.290233    2960 node_ready.go:35] waiting up to 6m0s for node "functional-818000" to be "Ready" ...
	I0821 04:16:36.291762    2960 node_ready.go:49] node "functional-818000" has status "Ready":"True"
	I0821 04:16:36.291765    2960 node_ready.go:38] duration metric: took 1.523125ms waiting for node "functional-818000" to be "Ready" ...
	I0821 04:16:36.291767    2960 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0821 04:16:36.291964    2960 start.go:874] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0821 04:16:36.294724    2960 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-vf68m" in "kube-system" namespace to be "Ready" ...
	I0821 04:16:36.298138    2960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0821 04:16:36.301126    2960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0821 04:16:36.601316    2960 pod_ready.go:92] pod "coredns-5d78c9869d-vf68m" in "kube-system" namespace has status "Ready":"True"
	I0821 04:16:36.601321    2960 pod_ready.go:81] duration metric: took 306.594125ms waiting for pod "coredns-5d78c9869d-vf68m" in "kube-system" namespace to be "Ready" ...
	I0821 04:16:36.601325    2960 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-818000" in "kube-system" namespace to be "Ready" ...
	I0821 04:16:36.667049    2960 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0821 04:16:36.675042    2960 addons.go:502] enable addons completed in 444.371625ms: enabled=[default-storageclass storage-provisioner]
	I0821 04:16:37.004357    2960 pod_ready.go:92] pod "etcd-functional-818000" in "kube-system" namespace has status "Ready":"True"
	I0821 04:16:37.004385    2960 pod_ready.go:81] duration metric: took 403.051667ms waiting for pod "etcd-functional-818000" in "kube-system" namespace to be "Ready" ...
	I0821 04:16:37.004403    2960 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-818000" in "kube-system" namespace to be "Ready" ...
	I0821 04:16:37.401536    2960 pod_ready.go:92] pod "kube-apiserver-functional-818000" in "kube-system" namespace has status "Ready":"True"
	I0821 04:16:37.401541    2960 pod_ready.go:81] duration metric: took 397.135625ms waiting for pod "kube-apiserver-functional-818000" in "kube-system" namespace to be "Ready" ...
	I0821 04:16:37.401545    2960 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-818000" in "kube-system" namespace to be "Ready" ...
	I0821 04:16:37.808187    2960 pod_ready.go:92] pod "kube-controller-manager-functional-818000" in "kube-system" namespace has status "Ready":"True"
	I0821 04:16:37.808219    2960 pod_ready.go:81] duration metric: took 406.667458ms waiting for pod "kube-controller-manager-functional-818000" in "kube-system" namespace to be "Ready" ...
	I0821 04:16:37.808246    2960 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cln6c" in "kube-system" namespace to be "Ready" ...
	I0821 04:16:38.203932    2960 pod_ready.go:92] pod "kube-proxy-cln6c" in "kube-system" namespace has status "Ready":"True"
	I0821 04:16:38.203953    2960 pod_ready.go:81] duration metric: took 395.701ms waiting for pod "kube-proxy-cln6c" in "kube-system" namespace to be "Ready" ...
	I0821 04:16:38.204027    2960 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-818000" in "kube-system" namespace to be "Ready" ...
	I0821 04:16:38.604220    2960 pod_ready.go:92] pod "kube-scheduler-functional-818000" in "kube-system" namespace has status "Ready":"True"
	I0821 04:16:38.604231    2960 pod_ready.go:81] duration metric: took 400.201667ms waiting for pod "kube-scheduler-functional-818000" in "kube-system" namespace to be "Ready" ...
	I0821 04:16:38.604241    2960 pod_ready.go:38] duration metric: took 2.312488208s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0821 04:16:38.604256    2960 api_server.go:52] waiting for apiserver process to appear ...
	I0821 04:16:38.604386    2960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0821 04:16:38.615699    2960 api_server.go:72] duration metric: took 2.375274834s to wait for apiserver process to appear ...
	I0821 04:16:38.615707    2960 api_server.go:88] waiting for apiserver healthz status ...
	I0821 04:16:38.615718    2960 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0821 04:16:38.622422    2960 api_server.go:279] https://192.168.105.4:8441/healthz returned 200:
	ok
	I0821 04:16:38.623476    2960 api_server.go:141] control plane version: v1.27.4
	I0821 04:16:38.623482    2960 api_server.go:131] duration metric: took 7.771708ms to wait for apiserver health ...
	I0821 04:16:38.623486    2960 system_pods.go:43] waiting for kube-system pods to appear ...
	I0821 04:16:38.814299    2960 system_pods.go:59] 7 kube-system pods found
	I0821 04:16:38.814328    2960 system_pods.go:61] "coredns-5d78c9869d-vf68m" [bd06a222-5e84-45c2-9d2b-cbcb234a649f] Running
	I0821 04:16:38.814337    2960 system_pods.go:61] "etcd-functional-818000" [e3a3c628-63c0-4e03-8a01-2c138fee28a5] Running
	I0821 04:16:38.814350    2960 system_pods.go:61] "kube-apiserver-functional-818000" [a9bfb3d5-7f96-47e9-bac9-848036bbe1c9] Running
	I0821 04:16:38.814358    2960 system_pods.go:61] "kube-controller-manager-functional-818000" [3f28b122-e0dc-451f-bb5d-49c4e8acbaba] Running
	I0821 04:16:38.814364    2960 system_pods.go:61] "kube-proxy-cln6c" [18c47362-aae2-46a2-be4d-16ff6d349cef] Running
	I0821 04:16:38.814371    2960 system_pods.go:61] "kube-scheduler-functional-818000" [cbcc3d8c-f71e-487d-aa51-5efe0b225e39] Running
	I0821 04:16:38.814378    2960 system_pods.go:61] "storage-provisioner" [497a18a3-4473-413e-bf26-83b0fdbae4cf] Running
	I0821 04:16:38.814389    2960 system_pods.go:74] duration metric: took 190.899166ms to wait for pod list to return data ...
	I0821 04:16:38.814401    2960 default_sa.go:34] waiting for default service account to be created ...
	I0821 04:16:39.004658    2960 default_sa.go:45] found service account: "default"
	I0821 04:16:39.004668    2960 default_sa.go:55] duration metric: took 190.263042ms for default service account to be created ...
	I0821 04:16:39.004676    2960 system_pods.go:116] waiting for k8s-apps to be running ...
	I0821 04:16:39.214035    2960 system_pods.go:86] 7 kube-system pods found
	I0821 04:16:39.214065    2960 system_pods.go:89] "coredns-5d78c9869d-vf68m" [bd06a222-5e84-45c2-9d2b-cbcb234a649f] Running
	I0821 04:16:39.214074    2960 system_pods.go:89] "etcd-functional-818000" [e3a3c628-63c0-4e03-8a01-2c138fee28a5] Running
	I0821 04:16:39.214082    2960 system_pods.go:89] "kube-apiserver-functional-818000" [a9bfb3d5-7f96-47e9-bac9-848036bbe1c9] Running
	I0821 04:16:39.214092    2960 system_pods.go:89] "kube-controller-manager-functional-818000" [3f28b122-e0dc-451f-bb5d-49c4e8acbaba] Running
	I0821 04:16:39.214099    2960 system_pods.go:89] "kube-proxy-cln6c" [18c47362-aae2-46a2-be4d-16ff6d349cef] Running
	I0821 04:16:39.214114    2960 system_pods.go:89] "kube-scheduler-functional-818000" [cbcc3d8c-f71e-487d-aa51-5efe0b225e39] Running
	I0821 04:16:39.214120    2960 system_pods.go:89] "storage-provisioner" [497a18a3-4473-413e-bf26-83b0fdbae4cf] Running
	I0821 04:16:39.214133    2960 system_pods.go:126] duration metric: took 209.453709ms to wait for k8s-apps to be running ...
	I0821 04:16:39.214142    2960 system_svc.go:44] waiting for kubelet service to be running ....
	I0821 04:16:39.214337    2960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0821 04:16:39.230679    2960 system_svc.go:56] duration metric: took 16.533667ms WaitForService to wait for kubelet.
	I0821 04:16:39.230690    2960 kubeadm.go:581] duration metric: took 2.9902705s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0821 04:16:39.230709    2960 node_conditions.go:102] verifying NodePressure condition ...
	I0821 04:16:39.407338    2960 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0821 04:16:39.407367    2960 node_conditions.go:123] node cpu capacity is 2
	I0821 04:16:39.407389    2960 node_conditions.go:105] duration metric: took 176.674042ms to run NodePressure ...
	I0821 04:16:39.407410    2960 start.go:228] waiting for startup goroutines ...
	I0821 04:16:39.407425    2960 start.go:233] waiting for cluster config update ...
	I0821 04:16:39.407445    2960 start.go:242] writing updated cluster config ...
	I0821 04:16:39.408750    2960 ssh_runner.go:195] Run: rm -f paused
	I0821 04:16:39.472809    2960 start.go:600] kubectl: 1.27.2, cluster: 1.27.4 (minor skew: 0)
	I0821 04:16:39.476140    2960 out.go:177] * Done! kubectl is now configured to use "functional-818000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Mon 2023-08-21 11:14:35 UTC, ends at Mon 2023-08-21 11:17:37 UTC. --
	Aug 21 11:17:25 functional-818000 dockerd[7079]: time="2023-08-21T11:17:25.199179451Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 21 11:17:25 functional-818000 dockerd[7079]: time="2023-08-21T11:17:25.199408995Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 21 11:17:25 functional-818000 cri-dockerd[7335]: time="2023-08-21T11:17:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fc625d933ebf194c178cc7b93542ee6900d13b7de761e1c8686a6bd4502a27cb/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Aug 21 11:17:26 functional-818000 cri-dockerd[7335]: time="2023-08-21T11:17:26Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	Aug 21 11:17:26 functional-818000 dockerd[7079]: time="2023-08-21T11:17:26.569736156Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 21 11:17:26 functional-818000 dockerd[7079]: time="2023-08-21T11:17:26.569764115Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 21 11:17:26 functional-818000 dockerd[7079]: time="2023-08-21T11:17:26.569773240Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 21 11:17:26 functional-818000 dockerd[7079]: time="2023-08-21T11:17:26.569779282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 21 11:17:26 functional-818000 dockerd[7072]: time="2023-08-21T11:17:26.623347539Z" level=info msg="ignoring event" container=87f5587b403fc195bbda61db7f70ef6c7b159c95a78d3a2dd977e5201f9e12ff module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 21 11:17:26 functional-818000 dockerd[7079]: time="2023-08-21T11:17:26.623596417Z" level=info msg="shim disconnected" id=87f5587b403fc195bbda61db7f70ef6c7b159c95a78d3a2dd977e5201f9e12ff namespace=moby
	Aug 21 11:17:26 functional-818000 dockerd[7079]: time="2023-08-21T11:17:26.623625459Z" level=warning msg="cleaning up after shim disconnected" id=87f5587b403fc195bbda61db7f70ef6c7b159c95a78d3a2dd977e5201f9e12ff namespace=moby
	Aug 21 11:17:26 functional-818000 dockerd[7079]: time="2023-08-21T11:17:26.623630209Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 21 11:17:26 functional-818000 dockerd[7079]: time="2023-08-21T11:17:26.631218352Z" level=warning msg="cleanup warnings time=\"2023-08-21T11:17:26Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Aug 21 11:17:28 functional-818000 dockerd[7072]: time="2023-08-21T11:17:28.451932180Z" level=info msg="ignoring event" container=fc625d933ebf194c178cc7b93542ee6900d13b7de761e1c8686a6bd4502a27cb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 21 11:17:28 functional-818000 dockerd[7079]: time="2023-08-21T11:17:28.451855887Z" level=info msg="shim disconnected" id=fc625d933ebf194c178cc7b93542ee6900d13b7de761e1c8686a6bd4502a27cb namespace=moby
	Aug 21 11:17:28 functional-818000 dockerd[7079]: time="2023-08-21T11:17:28.452209434Z" level=warning msg="cleaning up after shim disconnected" id=fc625d933ebf194c178cc7b93542ee6900d13b7de761e1c8686a6bd4502a27cb namespace=moby
	Aug 21 11:17:28 functional-818000 dockerd[7079]: time="2023-08-21T11:17:28.452216559Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 21 11:17:29 functional-818000 dockerd[7079]: time="2023-08-21T11:17:29.324446249Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 21 11:17:29 functional-818000 dockerd[7079]: time="2023-08-21T11:17:29.324479166Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 21 11:17:29 functional-818000 dockerd[7079]: time="2023-08-21T11:17:29.324486249Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 21 11:17:29 functional-818000 dockerd[7079]: time="2023-08-21T11:17:29.324490624Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 21 11:17:29 functional-818000 dockerd[7079]: time="2023-08-21T11:17:29.375644755Z" level=info msg="shim disconnected" id=bdfde4bb0dc4fedbf744b55befa3ef45273aaae65c77079b43d356e7a07c20ab namespace=moby
	Aug 21 11:17:29 functional-818000 dockerd[7072]: time="2023-08-21T11:17:29.375817132Z" level=info msg="ignoring event" container=bdfde4bb0dc4fedbf744b55befa3ef45273aaae65c77079b43d356e7a07c20ab module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 21 11:17:29 functional-818000 dockerd[7079]: time="2023-08-21T11:17:29.375863091Z" level=warning msg="cleaning up after shim disconnected" id=bdfde4bb0dc4fedbf744b55befa3ef45273aaae65c77079b43d356e7a07c20ab namespace=moby
	Aug 21 11:17:29 functional-818000 dockerd[7079]: time="2023-08-21T11:17:29.375882592Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID
	bdfde4bb0dc4f       72565bf5bbedf                                                                                         8 seconds ago        Exited              echoserver-arm            3                   de860513dad50
	87f5587b403fc       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   11 seconds ago       Exited              mount-munger              0                   fc625d933ebf1
	f8f4e6973c8f4       72565bf5bbedf                                                                                         17 seconds ago       Exited              echoserver-arm            2                   48eb0d7727aaf
	a82d7601838f6       nginx@sha256:104c7c5c54f2685f0f46f3be607ce60da7085da3eaa5ad22d3d9f01594295e9c                         21 seconds ago       Running             myfrontend                0                   d1d43b9860828
	10cdbdd9666bf       nginx@sha256:cac882be2b7305e0c8d3e3cd0575a2fd58f5fde6dd5d6299605aa0f3e67ca385                         38 seconds ago       Running             nginx                     0                   91295f6669de3
	09358712b5161       ba04bb24b9575                                                                                         About a minute ago   Running             storage-provisioner       2                   ebc0459a039a0
	1512bf9e0cc98       97e04611ad434                                                                                         About a minute ago   Running             coredns                   2                   04a4469c8894d
	e6bf09e2d248f       532e5a30e948f                                                                                         About a minute ago   Running             kube-proxy                2                   288f36c6ae8b3
	8ef2ca90579f5       24bc64e911039                                                                                         About a minute ago   Running             etcd                      2                   e07ce0657bb5d
	f547063d88e10       389f6f052cf83                                                                                         About a minute ago   Running             kube-controller-manager   2                   c137d5923a52f
	36eb9587ac326       64aece92d6bde                                                                                         About a minute ago   Running             kube-apiserver            0                   d39068f3b5cb1
	f6178a0539ee3       6eb63895cb67f                                                                                         About a minute ago   Running             kube-scheduler            2                   0aa52b0764455
	dd4174f7048e0       97e04611ad434                                                                                         2 minutes ago        Exited              coredns                   1                   c891ec2490ebc
	782163e35398b       532e5a30e948f                                                                                         2 minutes ago        Exited              kube-proxy                1                   7ec760ca0e63c
	3b33366d1b185       6eb63895cb67f                                                                                         2 minutes ago        Exited              kube-scheduler            1                   3f4ba66db620f
	6f28b21a03ca8       24bc64e911039                                                                                         2 minutes ago        Exited              etcd                      1                   85684db8ec506
	102aa0a24050d       389f6f052cf83                                                                                         2 minutes ago        Exited              kube-controller-manager   1                   f90be777672be
	668c90460f58a       ba04bb24b9575                                                                                         2 minutes ago        Exited              storage-provisioner       1                   86a5acd3e0ade
	
	* 
	* ==> coredns [1512bf9e0cc9] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:58973 - 13764 "HINFO IN 2339834959597515637.7109534419667711480. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.005269393s
	[INFO] 10.244.0.1:62207 - 41917 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000092958s
	[INFO] 10.244.0.1:34996 - 58390 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000142915s
	[INFO] 10.244.0.1:45802 - 50322 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.001437319s
	[INFO] 10.244.0.1:53767 - 58869 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000108041s
	[INFO] 10.244.0.1:41539 - 31179 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000070791s
	[INFO] 10.244.0.1:13779 - 59581 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000134498s
	
	* 
	* ==> coredns [dd4174f7048e] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:54701 - 49186 "HINFO IN 4433242266742195646.4506538270371499671. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004797001s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               functional-818000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-818000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43
	                    minikube.k8s.io/name=functional-818000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_21T04_14_52_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 21 Aug 2023 11:14:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-818000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 21 Aug 2023 11:17:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 21 Aug 2023 11:17:21 +0000   Mon, 21 Aug 2023 11:14:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 21 Aug 2023 11:17:21 +0000   Mon, 21 Aug 2023 11:14:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 21 Aug 2023 11:17:21 +0000   Mon, 21 Aug 2023 11:14:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 21 Aug 2023 11:17:21 +0000   Mon, 21 Aug 2023 11:14:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-818000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 80a0ba57adf34f12aee9fd1cddbb5a96
	  System UUID:                80a0ba57adf34f12aee9fd1cddbb5a96
	  Boot ID:                    893766e1-a6b7-4e38-a882-5480b6d227a0
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.4
	  Kubelet Version:            v1.27.4
	  Kube-Proxy Version:         v1.27.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-7b684b55f9-w49wx                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  default                     hello-node-connect-58d66798bb-j2x9r          0 (0%)        0 (0%)      0 (0%)           0 (0%)         32s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s
	  kube-system                 coredns-5d78c9869d-vf68m                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m32s
	  kube-system                 etcd-functional-818000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m45s
	  kube-system                 kube-apiserver-functional-818000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 kube-controller-manager-functional-818000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m45s
	  kube-system                 kube-proxy-cln6c                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 kube-scheduler-functional-818000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m45s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
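	(The percentages in the two tables above are computed against the node's Allocatable block, not raw Capacity. A quick arithmetic check of the non-zero totals: cpu 750m requested / 2000m allocatable = 37.5%, reported as 37%; memory 170Mi / 3905012Ki (~3813Mi) allocatable ≈ 4.5%, reported as 4%.)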
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 2m30s              kube-proxy       
	  Normal  Starting                 76s                kube-proxy       
	  Normal  Starting                 119s               kube-proxy       
	  Normal  Starting                 2m45s              kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m45s              kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m45s              kubelet          Node functional-818000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m45s              kubelet          Node functional-818000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m45s              kubelet          Node functional-818000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                2m41s              kubelet          Node functional-818000 status is now: NodeReady
	  Normal  RegisteredNode           2m33s              node-controller  Node functional-818000 event: Registered Node functional-818000 in Controller
	  Normal  RegisteredNode           107s               node-controller  Node functional-818000 event: Registered Node functional-818000 in Controller
	  Normal  Starting                 81s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  81s (x8 over 81s)  kubelet          Node functional-818000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    81s (x8 over 81s)  kubelet          Node functional-818000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     81s (x7 over 81s)  kubelet          Node functional-818000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  81s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           65s                node-controller  Node functional-818000 event: Registered Node functional-818000 in Controller
	
	* 
	* ==> dmesg <==
	* [  +2.760461] systemd-fstab-generator[4136]: Ignoring "noauto" for root device
	[  +0.135977] systemd-fstab-generator[4170]: Ignoring "noauto" for root device
	[  +0.085554] systemd-fstab-generator[4181]: Ignoring "noauto" for root device
	[  +0.088851] systemd-fstab-generator[4194]: Ignoring "noauto" for root device
	[ +11.334907] systemd-fstab-generator[4750]: Ignoring "noauto" for root device
	[  +0.066381] systemd-fstab-generator[4761]: Ignoring "noauto" for root device
	[  +0.082930] systemd-fstab-generator[4772]: Ignoring "noauto" for root device
	[  +0.075046] systemd-fstab-generator[4783]: Ignoring "noauto" for root device
	[  +0.092584] systemd-fstab-generator[4855]: Ignoring "noauto" for root device
	[  +6.244532] kauditd_printk_skb: 34 callbacks suppressed
	[Aug21 11:16] systemd-fstab-generator[6612]: Ignoring "noauto" for root device
	[  +0.137903] systemd-fstab-generator[6645]: Ignoring "noauto" for root device
	[  +0.091583] systemd-fstab-generator[6656]: Ignoring "noauto" for root device
	[  +0.089464] systemd-fstab-generator[6669]: Ignoring "noauto" for root device
	[ +11.406217] systemd-fstab-generator[7223]: Ignoring "noauto" for root device
	[  +0.067575] systemd-fstab-generator[7234]: Ignoring "noauto" for root device
	[  +0.083368] systemd-fstab-generator[7245]: Ignoring "noauto" for root device
	[  +0.065961] systemd-fstab-generator[7256]: Ignoring "noauto" for root device
	[  +0.103324] systemd-fstab-generator[7328]: Ignoring "noauto" for root device
	[  +0.918669] systemd-fstab-generator[7581]: Ignoring "noauto" for root device
	[  +4.631514] kauditd_printk_skb: 29 callbacks suppressed
	[ +25.446547] kauditd_printk_skb: 11 callbacks suppressed
	[  +0.777824] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[ +12.873614] kauditd_printk_skb: 1 callbacks suppressed
	[Aug21 11:17] kauditd_printk_skb: 1 callbacks suppressed
	
	* 
	* ==> etcd [6f28b21a03ca] <==
	* {"level":"info","ts":"2023-08-21T11:15:35.459Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2023-08-21T11:15:35.459Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-21T11:15:35.459Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-21T11:15:35.462Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-08-21T11:15:35.462Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-08-21T11:15:36.756Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 2"}
	{"level":"info","ts":"2023-08-21T11:15:36.757Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-08-21T11:15:36.757Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2023-08-21T11:15:36.757Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2023-08-21T11:15:36.757Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2023-08-21T11:15:36.757Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2023-08-21T11:15:36.757Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2023-08-21T11:15:36.762Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-818000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2023-08-21T11:15:36.762Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-21T11:15:36.762Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-21T11:15:36.764Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-08-21T11:15:36.764Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-08-21T11:15:36.765Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2023-08-21T11:15:36.765Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-08-21T11:16:03.556Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-08-21T11:16:03.556Z","caller":"embed/etcd.go:373","msg":"closing etcd server","name":"functional-818000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"info","ts":"2023-08-21T11:16:03.565Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2023-08-21T11:16:03.567Z","caller":"embed/etcd.go:568","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-08-21T11:16:03.569Z","caller":"embed/etcd.go:573","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-08-21T11:16:03.569Z","caller":"embed/etcd.go:375","msg":"closed etcd server","name":"functional-818000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	* 
	* ==> etcd [8ef2ca90579f] <==
	* {"level":"info","ts":"2023-08-21T11:16:17.491Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-08-21T11:16:17.491Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-08-21T11:16:17.491Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 switched to configuration voters=(527499358918876438)"}
	{"level":"info","ts":"2023-08-21T11:16:17.491Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2023-08-21T11:16:17.494Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-21T11:16:17.491Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-08-21T11:16:17.491Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-08-21T11:16:17.494Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-21T11:16:17.494Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"7520ddf439b1d16","initial-advertise-peer-urls":["https://192.168.105.4:2380"],"listen-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-08-21T11:16:17.494Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-08-21T11:16:17.494Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-08-21T11:16:19.347Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2023-08-21T11:16:19.347Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-08-21T11:16:19.347Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2023-08-21T11:16:19.347Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2023-08-21T11:16:19.347Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2023-08-21T11:16:19.347Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2023-08-21T11:16:19.347Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2023-08-21T11:16:19.352Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-21T11:16:19.352Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-21T11:16:19.355Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2023-08-21T11:16:19.355Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-08-21T11:16:19.352Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-818000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2023-08-21T11:16:19.355Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-08-21T11:16:19.356Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  11:17:37 up 3 min,  0 users,  load average: 0.80, 0.33, 0.12
	Linux functional-818000 5.10.57 #1 SMP PREEMPT Fri Jul 14 22:49:12 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [36eb9587ac32] <==
	* I0821 11:16:20.068863       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0821 11:16:20.068909       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0821 11:16:20.068929       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0821 11:16:20.068986       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0821 11:16:20.069089       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0821 11:16:20.073080       1 shared_informer.go:318] Caches are synced for configmaps
	I0821 11:16:20.083579       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0821 11:16:20.083612       1 aggregator.go:152] initial CRD sync complete...
	I0821 11:16:20.083633       1 autoregister_controller.go:141] Starting autoregister controller
	I0821 11:16:20.083647       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0821 11:16:20.083654       1 cache.go:39] Caches are synced for autoregister controller
	I0821 11:16:20.843116       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0821 11:16:20.973608       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0821 11:16:21.125550       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0821 11:16:21.131778       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0821 11:16:21.149820       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0821 11:16:21.161302       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0821 11:16:21.164510       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0821 11:16:32.482107       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0821 11:16:32.556214       1 controller.go:624] quota admission added evaluator for: endpoints
	I0821 11:16:40.987449       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs=map[IPv4:10.107.229.152]
	I0821 11:16:46.111303       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0821 11:16:46.154714       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs=map[IPv4:10.104.98.201]
	I0821 11:16:56.221359       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs=map[IPv4:10.104.144.38]
	I0821 11:17:05.690456       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs=map[IPv4:10.106.81.98]
	
	* 
	* ==> kube-controller-manager [102aa0a24050] <==
	* I0821 11:15:50.129912       1 shared_informer.go:318] Caches are synced for namespace
	I0821 11:15:50.131049       1 shared_informer.go:318] Caches are synced for TTL
	I0821 11:15:50.133189       1 shared_informer.go:318] Caches are synced for deployment
	I0821 11:15:50.133497       1 shared_informer.go:318] Caches are synced for service account
	I0821 11:15:50.134310       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0821 11:15:50.134341       1 shared_informer.go:318] Caches are synced for PV protection
	I0821 11:15:50.136493       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0821 11:15:50.188915       1 shared_informer.go:318] Caches are synced for taint
	I0821 11:15:50.189038       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I0821 11:15:50.189055       1 taint_manager.go:211] "Sending events to api server"
	I0821 11:15:50.189246       1 node_lifecycle_controller.go:1223] "Initializing eviction metric for zone" zone=""
	I0821 11:15:50.189277       1 node_lifecycle_controller.go:875] "Missing timestamp for Node. Assuming now as a timestamp" node="functional-818000"
	I0821 11:15:50.189291       1 node_lifecycle_controller.go:1069] "Controller detected that zone is now in new state" zone="" newState=Normal
	I0821 11:15:50.189429       1 event.go:307] "Event occurred" object="functional-818000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node functional-818000 event: Registered Node functional-818000 in Controller"
	I0821 11:15:50.255850       1 shared_informer.go:318] Caches are synced for resource quota
	I0821 11:15:50.285872       1 shared_informer.go:318] Caches are synced for stateful set
	I0821 11:15:50.299459       1 shared_informer.go:318] Caches are synced for resource quota
	I0821 11:15:50.307990       1 shared_informer.go:318] Caches are synced for ephemeral
	I0821 11:15:50.310234       1 shared_informer.go:318] Caches are synced for PVC protection
	I0821 11:15:50.333023       1 shared_informer.go:318] Caches are synced for expand
	I0821 11:15:50.334233       1 shared_informer.go:318] Caches are synced for attach detach
	I0821 11:15:50.336432       1 shared_informer.go:318] Caches are synced for persistent volume
	I0821 11:15:50.663038       1 shared_informer.go:318] Caches are synced for garbage collector
	I0821 11:15:50.691158       1 shared_informer.go:318] Caches are synced for garbage collector
	I0821 11:15:50.691253       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	* 
	* ==> kube-controller-manager [f547063d88e1] <==
	* I0821 11:16:32.327402       1 shared_informer.go:318] Caches are synced for deployment
	I0821 11:16:32.327669       1 shared_informer.go:318] Caches are synced for service account
	I0821 11:16:32.329492       1 shared_informer.go:318] Caches are synced for job
	I0821 11:16:32.347858       1 shared_informer.go:318] Caches are synced for TTL
	I0821 11:16:32.353458       1 shared_informer.go:318] Caches are synced for HPA
	I0821 11:16:32.372872       1 shared_informer.go:318] Caches are synced for node
	I0821 11:16:32.372941       1 range_allocator.go:174] "Sending events to api server"
	I0821 11:16:32.372963       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0821 11:16:32.372973       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0821 11:16:32.372980       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0821 11:16:32.397676       1 shared_informer.go:318] Caches are synced for disruption
	I0821 11:16:32.409734       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0821 11:16:32.476833       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0821 11:16:32.491898       1 shared_informer.go:318] Caches are synced for resource quota
	I0821 11:16:32.516724       1 shared_informer.go:318] Caches are synced for endpoint
	I0821 11:16:32.528543       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0821 11:16:32.557005       1 shared_informer.go:318] Caches are synced for resource quota
	I0821 11:16:32.873638       1 shared_informer.go:318] Caches are synced for garbage collector
	I0821 11:16:32.939958       1 shared_informer.go:318] Caches are synced for garbage collector
	I0821 11:16:32.940025       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0821 11:16:46.112742       1 event.go:307] "Event occurred" object="default/hello-node" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-7b684b55f9 to 1"
	I0821 11:16:46.120119       1 event.go:307] "Event occurred" object="default/hello-node-7b684b55f9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-7b684b55f9-w49wx"
	I0821 11:17:03.887752       1 event.go:307] "Event occurred" object="default/myclaim" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0821 11:17:05.647614       1 event.go:307] "Event occurred" object="default/hello-node-connect" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-connect-58d66798bb to 1"
	I0821 11:17:05.650296       1 event.go:307] "Event occurred" object="default/hello-node-connect-58d66798bb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-connect-58d66798bb-j2x9r"
	
	* 
	* ==> kube-proxy [782163e35398] <==
	* I0821 11:15:37.505106       1 node.go:141] Successfully retrieved node IP: 192.168.105.4
	I0821 11:15:37.505188       1 server_others.go:110] "Detected node IP" address="192.168.105.4"
	I0821 11:15:37.505203       1 server_others.go:554] "Using iptables proxy"
	I0821 11:15:37.524259       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0821 11:15:37.524268       1 server_others.go:192] "Using iptables Proxier"
	I0821 11:15:37.524282       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0821 11:15:37.524449       1 server.go:658] "Version info" version="v1.27.4"
	I0821 11:15:37.524454       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0821 11:15:37.525262       1 config.go:188] "Starting service config controller"
	I0821 11:15:37.526726       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0821 11:15:37.525372       1 config.go:97] "Starting endpoint slice config controller"
	I0821 11:15:37.526757       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0821 11:15:37.525529       1 config.go:315] "Starting node config controller"
	I0821 11:15:37.526771       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0821 11:15:37.526778       1 shared_informer.go:318] Caches are synced for node config
	I0821 11:15:37.526854       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0821 11:15:37.526966       1 shared_informer.go:318] Caches are synced for service config
	
	* 
	* ==> kube-proxy [e6bf09e2d248] <==
	* I0821 11:16:20.824784       1 node.go:141] Successfully retrieved node IP: 192.168.105.4
	I0821 11:16:20.824832       1 server_others.go:110] "Detected node IP" address="192.168.105.4"
	I0821 11:16:20.824858       1 server_others.go:554] "Using iptables proxy"
	I0821 11:16:20.836862       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0821 11:16:20.836875       1 server_others.go:192] "Using iptables Proxier"
	I0821 11:16:20.836937       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0821 11:16:20.837136       1 server.go:658] "Version info" version="v1.27.4"
	I0821 11:16:20.837162       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0821 11:16:20.837455       1 config.go:188] "Starting service config controller"
	I0821 11:16:20.837465       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0821 11:16:20.837474       1 config.go:97] "Starting endpoint slice config controller"
	I0821 11:16:20.837494       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0821 11:16:20.837879       1 config.go:315] "Starting node config controller"
	I0821 11:16:20.837908       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0821 11:16:20.937925       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0821 11:16:20.937926       1 shared_informer.go:318] Caches are synced for service config
	I0821 11:16:20.937938       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [3b33366d1b18] <==
	* I0821 11:15:35.699228       1 serving.go:348] Generated self-signed cert in-memory
	W0821 11:15:37.478832       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0821 11:15:37.478943       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0821 11:15:37.478972       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0821 11:15:37.478986       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0821 11:15:37.499285       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.27.4"
	I0821 11:15:37.499396       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0821 11:15:37.500422       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0821 11:15:37.500525       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0821 11:15:37.500549       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0821 11:15:37.500571       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0821 11:15:37.601219       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0821 11:16:03.565672       1 scheduling_queue.go:1139] "Error while retrieving next pod from scheduling queue" err="scheduling queue is closed"
	I0821 11:16:03.565886       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I0821 11:16:03.565898       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0821 11:16:03.565976       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kube-scheduler [f6178a0539ee] <==
	* I0821 11:16:17.704449       1 serving.go:348] Generated self-signed cert in-memory
	W0821 11:16:19.996591       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0821 11:16:19.996620       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0821 11:16:19.996625       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0821 11:16:19.996628       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0821 11:16:20.014325       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.27.4"
	I0821 11:16:20.014354       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0821 11:16:20.015207       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0821 11:16:20.015284       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0821 11:16:20.015395       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0821 11:16:20.015438       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0821 11:16:20.116278       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-08-21 11:14:35 UTC, ends at Mon 2023-08-21 11:17:37 UTC. --
	Aug 21 11:17:18 functional-818000 kubelet[7587]: I0821 11:17:18.254887    7587 scope.go:115] "RemoveContainer" containerID="1ad522d9074755e86772e762300369efc9699b4cd6aee36572603cb8156bee28"
	Aug 21 11:17:18 functional-818000 kubelet[7587]: E0821 11:17:18.255111    7587 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-7b684b55f9-w49wx_default(0409257d-b782-41cf-8fff-a8ed59b258e2)\"" pod="default/hello-node-7b684b55f9-w49wx" podUID=0409257d-b782-41cf-8fff-a8ed59b258e2
	Aug 21 11:17:20 functional-818000 kubelet[7587]: I0821 11:17:20.254886    7587 scope.go:115] "RemoveContainer" containerID="fb07530478aa36b1fbfe76e3988e4c829df4266681e15ba0e5bfacbfdf8466d7"
	Aug 21 11:17:21 functional-818000 kubelet[7587]: I0821 11:17:21.185041    7587 scope.go:115] "RemoveContainer" containerID="fb07530478aa36b1fbfe76e3988e4c829df4266681e15ba0e5bfacbfdf8466d7"
	Aug 21 11:17:21 functional-818000 kubelet[7587]: I0821 11:17:21.185355    7587 scope.go:115] "RemoveContainer" containerID="f8f4e6973c8f44f492533c96f99eb51ca0310187b2f78c701a22396b20f4a00f"
	Aug 21 11:17:21 functional-818000 kubelet[7587]: E0821 11:17:21.185521    7587 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-58d66798bb-j2x9r_default(2c452360-3344-4683-8627-4ae2bbe7a380)\"" pod="default/hello-node-connect-58d66798bb-j2x9r" podUID=2c452360-3344-4683-8627-4ae2bbe7a380
	Aug 21 11:17:24 functional-818000 kubelet[7587]: I0821 11:17:24.843750    7587 topology_manager.go:212] "Topology Admit Handler"
	Aug 21 11:17:25 functional-818000 kubelet[7587]: I0821 11:17:25.004095    7587 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7v66n\" (UniqueName: \"kubernetes.io/projected/bcb23c58-d07e-4a92-9c9b-c45c9b914521-kube-api-access-7v66n\") pod \"busybox-mount\" (UID: \"bcb23c58-d07e-4a92-9c9b-c45c9b914521\") " pod="default/busybox-mount"
	Aug 21 11:17:25 functional-818000 kubelet[7587]: I0821 11:17:25.004183    7587 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/bcb23c58-d07e-4a92-9c9b-c45c9b914521-test-volume\") pod \"busybox-mount\" (UID: \"bcb23c58-d07e-4a92-9c9b-c45c9b914521\") " pod="default/busybox-mount"
	Aug 21 11:17:25 functional-818000 kubelet[7587]: I0821 11:17:25.318815    7587 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc625d933ebf194c178cc7b93542ee6900d13b7de761e1c8686a6bd4502a27cb"
	Aug 21 11:17:28 functional-818000 kubelet[7587]: I0821 11:17:28.557193    7587 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7v66n\" (UniqueName: \"kubernetes.io/projected/bcb23c58-d07e-4a92-9c9b-c45c9b914521-kube-api-access-7v66n\") pod \"bcb23c58-d07e-4a92-9c9b-c45c9b914521\" (UID: \"bcb23c58-d07e-4a92-9c9b-c45c9b914521\") "
	Aug 21 11:17:28 functional-818000 kubelet[7587]: I0821 11:17:28.557602    7587 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/bcb23c58-d07e-4a92-9c9b-c45c9b914521-test-volume\") pod \"bcb23c58-d07e-4a92-9c9b-c45c9b914521\" (UID: \"bcb23c58-d07e-4a92-9c9b-c45c9b914521\") "
	Aug 21 11:17:28 functional-818000 kubelet[7587]: I0821 11:17:28.557628    7587 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bcb23c58-d07e-4a92-9c9b-c45c9b914521-test-volume" (OuterVolumeSpecName: "test-volume") pod "bcb23c58-d07e-4a92-9c9b-c45c9b914521" (UID: "bcb23c58-d07e-4a92-9c9b-c45c9b914521"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Aug 21 11:17:28 functional-818000 kubelet[7587]: I0821 11:17:28.558615    7587 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcb23c58-d07e-4a92-9c9b-c45c9b914521-kube-api-access-7v66n" (OuterVolumeSpecName: "kube-api-access-7v66n") pod "bcb23c58-d07e-4a92-9c9b-c45c9b914521" (UID: "bcb23c58-d07e-4a92-9c9b-c45c9b914521"). InnerVolumeSpecName "kube-api-access-7v66n". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 21 11:17:28 functional-818000 kubelet[7587]: I0821 11:17:28.659458    7587 reconciler_common.go:300] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/bcb23c58-d07e-4a92-9c9b-c45c9b914521-test-volume\") on node \"functional-818000\" DevicePath \"\""
	Aug 21 11:17:28 functional-818000 kubelet[7587]: I0821 11:17:28.659479    7587 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-7v66n\" (UniqueName: \"kubernetes.io/projected/bcb23c58-d07e-4a92-9c9b-c45c9b914521-kube-api-access-7v66n\") on node \"functional-818000\" DevicePath \"\""
	Aug 21 11:17:29 functional-818000 kubelet[7587]: I0821 11:17:29.254904    7587 scope.go:115] "RemoveContainer" containerID="1ad522d9074755e86772e762300369efc9699b4cd6aee36572603cb8156bee28"
	Aug 21 11:17:29 functional-818000 kubelet[7587]: I0821 11:17:29.397474    7587 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc625d933ebf194c178cc7b93542ee6900d13b7de761e1c8686a6bd4502a27cb"
	Aug 21 11:17:29 functional-818000 kubelet[7587]: I0821 11:17:29.408904    7587 scope.go:115] "RemoveContainer" containerID="bdfde4bb0dc4fedbf744b55befa3ef45273aaae65c77079b43d356e7a07c20ab"
	Aug 21 11:17:29 functional-818000 kubelet[7587]: E0821 11:17:29.409520    7587 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 40s restarting failed container=echoserver-arm pod=hello-node-7b684b55f9-w49wx_default(0409257d-b782-41cf-8fff-a8ed59b258e2)\"" pod="default/hello-node-7b684b55f9-w49wx" podUID=0409257d-b782-41cf-8fff-a8ed59b258e2
	Aug 21 11:17:30 functional-818000 kubelet[7587]: I0821 11:17:30.417300    7587 scope.go:115] "RemoveContainer" containerID="1ad522d9074755e86772e762300369efc9699b4cd6aee36572603cb8156bee28"
	Aug 21 11:17:30 functional-818000 kubelet[7587]: I0821 11:17:30.417473    7587 scope.go:115] "RemoveContainer" containerID="bdfde4bb0dc4fedbf744b55befa3ef45273aaae65c77079b43d356e7a07c20ab"
	Aug 21 11:17:30 functional-818000 kubelet[7587]: E0821 11:17:30.417557    7587 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 40s restarting failed container=echoserver-arm pod=hello-node-7b684b55f9-w49wx_default(0409257d-b782-41cf-8fff-a8ed59b258e2)\"" pod="default/hello-node-7b684b55f9-w49wx" podUID=0409257d-b782-41cf-8fff-a8ed59b258e2
	Aug 21 11:17:34 functional-818000 kubelet[7587]: I0821 11:17:34.254891    7587 scope.go:115] "RemoveContainer" containerID="f8f4e6973c8f44f492533c96f99eb51ca0310187b2f78c701a22396b20f4a00f"
	Aug 21 11:17:34 functional-818000 kubelet[7587]: E0821 11:17:34.255760    7587 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-58d66798bb-j2x9r_default(2c452360-3344-4683-8627-4ae2bbe7a380)\"" pod="default/hello-node-connect-58d66798bb-j2x9r" podUID=2c452360-3344-4683-8627-4ae2bbe7a380
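	
	(The "back-off 20s" to "back-off 40s" progression in the CrashLoopBackOff errors above is kubelet's standard crash-loop policy: the restart delay doubles from a 10s base and is capped at 5 minutes. A minimal Go sketch of that schedule, for illustration only; the 10s base and 5m cap are the Kubernetes defaults, not values read from this log:)
	
	package main
	
	import (
		"fmt"
		"time"
	)
	
	func main() {
		// Crash-loop restart delays: each failed restart doubles the
		// previous back-off (10s base), capped at 5 minutes.
		backoff := 10 * time.Second
		const maxBackoff = 5 * time.Minute
		for crash := 1; crash <= 7; crash++ {
			fmt.Printf("crash %d: back-off %v\n", crash, backoff)
			backoff *= 2
			if backoff > maxBackoff {
				backoff = maxBackoff
			}
		}
	}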
	
	* 
	* ==> storage-provisioner [09358712b516] <==
	* I0821 11:16:20.822816       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0821 11:16:20.829375       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0821 11:16:20.829471       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0821 11:16:38.245734       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0821 11:16:38.246335       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-818000_930ebb04-7813-489c-8ac8-bebdf934ce68!
	I0821 11:16:38.250760       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"80c2a0ac-84a6-498b-bb60-35c7513702f4", APIVersion:"v1", ResourceVersion:"587", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-818000_930ebb04-7813-489c-8ac8-bebdf934ce68 became leader
	I0821 11:16:38.347124       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-818000_930ebb04-7813-489c-8ac8-bebdf934ce68!
	I0821 11:17:03.888157       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0821 11:17:03.888473       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"846b5820-bcd9-4aa1-84bb-0bba0ed9f151", APIVersion:"v1", ResourceVersion:"690", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0821 11:17:03.888234       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    9bd56560-7d5a-4160-a2f3-78f46767ab0e 353 0 2023-08-21 11:15:06 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2023-08-21 11:15:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-846b5820-bcd9-4aa1-84bb-0bba0ed9f151 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  846b5820-bcd9-4aa1-84bb-0bba0ed9f151 690 0 2023-08-21 11:17:03 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2023-08-21 11:17:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2023-08-21 11:17:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0821 11:17:03.889005       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-846b5820-bcd9-4aa1-84bb-0bba0ed9f151" provisioned
	I0821 11:17:03.889055       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0821 11:17:03.889123       1 volume_store.go:212] Trying to save persistentvolume "pvc-846b5820-bcd9-4aa1-84bb-0bba0ed9f151"
	I0821 11:17:03.894006       1 volume_store.go:219] persistentvolume "pvc-846b5820-bcd9-4aa1-84bb-0bba0ed9f151" saved
	I0821 11:17:03.895293       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"846b5820-bcd9-4aa1-84bb-0bba0ed9f151", APIVersion:"v1", ResourceVersion:"690", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-846b5820-bcd9-4aa1-84bb-0bba0ed9f151
	
	* 
	* ==> storage-provisioner [668c90460f58] <==
	* I0821 11:15:35.308395       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0821 11:15:37.506691       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0821 11:15:37.506715       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0821 11:15:54.926520       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0821 11:15:54.926937       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-818000_a9fed7dc-d9c3-4e14-bb5d-d9304f089234!
	I0821 11:15:54.928862       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"80c2a0ac-84a6-498b-bb60-35c7513702f4", APIVersion:"v1", ResourceVersion:"489", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-818000_a9fed7dc-d9c3-4e14-bb5d-d9304f089234 became leader
	I0821 11:15:55.028449       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-818000_a9fed7dc-d9c3-4e14-bb5d-d9304f089234!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-818000 -n functional-818000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-818000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-818000 describe pod busybox-mount
helpers_test.go:282: (dbg) kubectl --context functional-818000 describe pod busybox-mount:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-818000/192.168.105.4
	Start Time:       Mon, 21 Aug 2023 04:17:24 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.12
	IPs:
	  IP:  10.244.0.12
	Containers:
	  mount-munger:
	    Container ID:  docker://87f5587b403fc195bbda61db7f70ef6c7b159c95a78d3a2dd977e5201f9e12ff
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 21 Aug 2023 04:17:26 -0700
	      Finished:     Mon, 21 Aug 2023 04:17:26 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7v66n (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-7v66n:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  12s   default-scheduler  Successfully assigned default/busybox-mount to functional-818000
	  Normal  Pulling    12s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     11s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.20305191s (1.203060868s including waiting)
	  Normal  Created    11s   kubelet            Created container mount-munger
	  Normal  Started    11s   kubelet            Started container mount-munger

-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (32.16s)
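Note: the post-mortem lists busybox-mount only because its phase is Succeeded rather than Running; the harness collects non-running pods with the field selector shown above (status.phase!=Running). For reference, a minimal client-go sketch of the same query; the kubeconfig loading, default context, and error handling here are illustrative assumptions, not what helpers_test.go actually does:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load ~/.kube/config with its current context; selecting the
	// functional-818000 context explicitly is left out for brevity.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Same selector as the kubectl call above: every pod whose phase is not
	// Running, across all namespaces. A Succeeded pod like busybox-mount matches.
	pods, err := clientset.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s: %s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}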

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.17s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-818000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-818000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 80. stderr: I0821 04:16:55.907777    3123 out.go:296] Setting OutFile to fd 1 ...
I0821 04:16:55.908019    3123 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0821 04:16:55.908022    3123 out.go:309] Setting ErrFile to fd 2...
I0821 04:16:55.908024    3123 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0821 04:16:55.908137    3123 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17102-920/.minikube/bin
I0821 04:16:55.908313    3123 mustload.go:65] Loading cluster: functional-818000
I0821 04:16:55.908551    3123 config.go:182] Loaded profile config "functional-818000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
I0821 04:16:55.912199    3123 out.go:177] 
W0821 04:16:55.915303    3123 out.go:239] X Exiting due to GUEST_STATUS: Unable to get machine status: state: connect: dial unix /Users/jenkins/minikube-integration/17102-920/.minikube/machines/functional-818000/monitor: connect: connection refused
X Exiting due to GUEST_STATUS: Unable to get machine status: state: connect: dial unix /Users/jenkins/minikube-integration/17102-920/.minikube/machines/functional-818000/monitor: connect: connection refused
W0821 04:16:55.915308    3123 out.go:239] * 
* 
W0821 04:16:55.916696    3123 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                           │
│    * If the above advice does not help, please let us know:                                                               │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
│                                                                                                                           │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
│    * Please also attach the following file to the GitHub issue:                                                           │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_tunnel_7075cb44437691034d825beac909ba5df9688569_0.log    │
│                                                                                                                           │
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                           │
│    * If the above advice does not help, please let us know:                                                               │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
│                                                                                                                           │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
│    * Please also attach the following file to the GitHub issue:                                                           │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_tunnel_7075cb44437691034d825beac909ba5df9688569_0.log    │
│                                                                                                                           │
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0821 04:16:55.919159    3123 out.go:177] 

stdout: 

functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-818000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-818000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-818000 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-818000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 3122: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-818000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-818000 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.17s)
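Note: exit code 80 is GUEST_STATUS; both tunnel invocations fail because dialing the VM's QMP monitor socket is refused, i.e. nothing is listening there and the qemu2 process behind functional-818000 has already exited. A minimal sketch of the same liveness probe, with the socket path copied from the error above (the 2-second timeout is an arbitrary choice):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Path taken verbatim from the GUEST_STATUS error; adjust for your own
	// MINIKUBE_HOME. "connection refused" means no qemu process holds the socket.
	const sock = "/Users/jenkins/minikube-integration/17102-920/.minikube/machines/functional-818000/monitor"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Println("monitor unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("monitor is listening; the VM process appears to be alive")
}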

TestImageBuild/serial/BuildWithBuildArg (1.07s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-925000
image_test.go:105: failed to pass build-args with args: "out/minikube-darwin-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-925000" : 
-- stdout --
	Sending build context to Docker daemon  2.048kB
	Step 1/5 : FROM gcr.io/google-containers/alpine-with-bash:1.0
	 ---> 822c13824dc2
	Step 2/5 : ARG ENV_A
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in 37e934e0f458
	Removing intermediate container 37e934e0f458
	 ---> f5b31e98adf0
	Step 3/5 : ARG ENV_B
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in a62d844241c7
	Removing intermediate container a62d844241c7
	 ---> 01abd0ceb543
	Step 4/5 : RUN echo "test-build-arg" $ENV_A $ENV_B
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in 0e0332cd7d49
	exec /bin/sh: exec format error
	

-- /stdout --
** stderr ** 
	DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
	            Install the buildx component to build images with BuildKit:
	            https://docs.docker.com/go/buildx/
	
	The command '/bin/sh -c echo "test-build-arg" $ENV_A $ENV_B' returned a non-zero code: 1

** /stderr **
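Note: the build reaches Step 4/5 and the RUN step then aborts with "exec /bin/sh: exec format error"; the per-step warnings explain why, since gcr.io/google-containers/alpine-with-bash:1.0 ships only linux/amd64 and the guest is linux/arm64/v8, so its binaries cannot execute without emulation. A hedged sketch of surfacing that mismatch before building, assuming the docker CLI is pointed at the cluster's daemon (e.g. via minikube -p image-925000 docker-env); "docker image inspect" and its .Architecture field are standard Docker CLI:

package main

import (
	"fmt"
	"os/exec"
	"runtime"
	"strings"
)

func main() {
	// Image name taken from the failing build above; it must already be pulled
	// for "docker image inspect" to succeed.
	const image = "gcr.io/google-containers/alpine-with-bash:1.0"
	out, err := exec.Command("docker", "image", "inspect",
		"--format", "{{.Architecture}}", image).Output()
	if err != nil {
		fmt.Println("inspect failed (image not pulled yet?):", err)
		return
	}
	arch := strings.TrimSpace(string(out))
	if arch != runtime.GOARCH {
		fmt.Printf("image is %s, host is %s: RUN steps will fail with 'exec format error'\n",
			arch, runtime.GOARCH)
	}
}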
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-925000 -n image-925000
helpers_test.go:244: <<< TestImageBuild/serial/BuildWithBuildArg FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestImageBuild/serial/BuildWithBuildArg]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p image-925000 logs -n 25
helpers_test.go:252: TestImageBuild/serial/BuildWithBuildArg logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                   Args                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-818000 ssh findmnt            | functional-818000 | jenkins | v1.31.2 | 21 Aug 23 04:17 PDT |                     |
	|                | -T /mount1                               |                   |         |         |                     |                     |
	| ssh            | functional-818000 ssh findmnt            | functional-818000 | jenkins | v1.31.2 | 21 Aug 23 04:17 PDT |                     |
	|                | -T /mount1                               |                   |         |         |                     |                     |
	| ssh            | functional-818000 ssh findmnt            | functional-818000 | jenkins | v1.31.2 | 21 Aug 23 04:17 PDT |                     |
	|                | -T /mount1                               |                   |         |         |                     |                     |
	| ssh            | functional-818000 ssh findmnt            | functional-818000 | jenkins | v1.31.2 | 21 Aug 23 04:17 PDT |                     |
	|                | -T /mount1                               |                   |         |         |                     |                     |
	| ssh            | functional-818000 ssh findmnt            | functional-818000 | jenkins | v1.31.2 | 21 Aug 23 04:17 PDT |                     |
	|                | -T /mount1                               |                   |         |         |                     |                     |
	| ssh            | functional-818000 ssh findmnt            | functional-818000 | jenkins | v1.31.2 | 21 Aug 23 04:17 PDT |                     |
	|                | -T /mount1                               |                   |         |         |                     |                     |
	| start          | -p functional-818000                     | functional-818000 | jenkins | v1.31.2 | 21 Aug 23 04:17 PDT |                     |
	|                | --dry-run --memory                       |                   |         |         |                     |                     |
	|                | 250MB --alsologtostderr                  |                   |         |         |                     |                     |
	|                | --driver=qemu2                           |                   |         |         |                     |                     |
	| start          | -p functional-818000 --dry-run           | functional-818000 | jenkins | v1.31.2 | 21 Aug 23 04:17 PDT |                     |
	|                | --alsologtostderr -v=1                   |                   |         |         |                     |                     |
	|                | --driver=qemu2                           |                   |         |         |                     |                     |
	| start          | -p functional-818000                     | functional-818000 | jenkins | v1.31.2 | 21 Aug 23 04:17 PDT |                     |
	|                | --dry-run --memory                       |                   |         |         |                     |                     |
	|                | 250MB --alsologtostderr                  |                   |         |         |                     |                     |
	|                | --driver=qemu2                           |                   |         |         |                     |                     |
	| dashboard      | --url --port 36195                       | functional-818000 | jenkins | v1.31.2 | 21 Aug 23 04:17 PDT | 21 Aug 23 04:17 PDT |
	|                | -p functional-818000                     |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                   |                   |         |         |                     |                     |
	| ssh            | functional-818000 ssh findmnt            | functional-818000 | jenkins | v1.31.2 | 21 Aug 23 04:17 PDT |                     |
	|                | -T /mount1                               |                   |         |         |                     |                     |
	| update-context | functional-818000                        | functional-818000 | jenkins | v1.31.2 | 21 Aug 23 04:17 PDT | 21 Aug 23 04:17 PDT |
	|                | update-context                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                   |         |         |                     |                     |
	| update-context | functional-818000                        | functional-818000 | jenkins | v1.31.2 | 21 Aug 23 04:17 PDT | 21 Aug 23 04:17 PDT |
	|                | update-context                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                   |         |         |                     |                     |
	| update-context | functional-818000                        | functional-818000 | jenkins | v1.31.2 | 21 Aug 23 04:17 PDT | 21 Aug 23 04:17 PDT |
	|                | update-context                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                   |         |         |                     |                     |
	| image          | functional-818000                        | functional-818000 | jenkins | v1.31.2 | 21 Aug 23 04:17 PDT | 21 Aug 23 04:17 PDT |
	|                | image ls --format short                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                        |                   |         |         |                     |                     |
	| image          | functional-818000                        | functional-818000 | jenkins | v1.31.2 | 21 Aug 23 04:17 PDT | 21 Aug 23 04:17 PDT |
	|                | image ls --format yaml                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                        |                   |         |         |                     |                     |
	| image          | functional-818000                        | functional-818000 | jenkins | v1.31.2 | 21 Aug 23 04:17 PDT | 21 Aug 23 04:17 PDT |
	|                | image ls --format json                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                        |                   |         |         |                     |                     |
	| image          | functional-818000                        | functional-818000 | jenkins | v1.31.2 | 21 Aug 23 04:17 PDT | 21 Aug 23 04:17 PDT |
	|                | image ls --format table                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                        |                   |         |         |                     |                     |
	| ssh            | functional-818000 ssh pgrep              | functional-818000 | jenkins | v1.31.2 | 21 Aug 23 04:17 PDT |                     |
	|                | buildkitd                                |                   |         |         |                     |                     |
	| image          | functional-818000 image build -t         | functional-818000 | jenkins | v1.31.2 | 21 Aug 23 04:17 PDT | 21 Aug 23 04:17 PDT |
	|                | localhost/my-image:functional-818000     |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr         |                   |         |         |                     |                     |
	| image          | functional-818000 image ls               | functional-818000 | jenkins | v1.31.2 | 21 Aug 23 04:17 PDT | 21 Aug 23 04:17 PDT |
	| delete         | -p functional-818000                     | functional-818000 | jenkins | v1.31.2 | 21 Aug 23 04:17 PDT | 21 Aug 23 04:17 PDT |
	| start          | -p image-925000 --driver=qemu2           | image-925000      | jenkins | v1.31.2 | 21 Aug 23 04:17 PDT | 21 Aug 23 04:18 PDT |
	|                |                                          |                   |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-925000      | jenkins | v1.31.2 | 21 Aug 23 04:18 PDT | 21 Aug 23 04:18 PDT |
	|                | ./testdata/image-build/test-normal       |                   |         |         |                     |                     |
	|                | -p image-925000                          |                   |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-925000      | jenkins | v1.31.2 | 21 Aug 23 04:18 PDT | 21 Aug 23 04:18 PDT |
	|                | --build-opt=build-arg=ENV_A=test_env_str |                   |         |         |                     |                     |
	|                | --build-opt=no-cache                     |                   |         |         |                     |                     |
	|                | ./testdata/image-build/test-arg -p       |                   |         |         |                     |                     |
	|                | image-925000                             |                   |         |         |                     |                     |
	|----------------|------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/21 04:17:47
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.7 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0821 04:17:47.179281    3344 out.go:296] Setting OutFile to fd 1 ...
	I0821 04:17:47.179395    3344 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:17:47.179396    3344 out.go:309] Setting ErrFile to fd 2...
	I0821 04:17:47.179398    3344 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:17:47.179534    3344 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17102-920/.minikube/bin
	I0821 04:17:47.180540    3344 out.go:303] Setting JSON to false
	I0821 04:17:47.195806    3344 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2841,"bootTime":1692613826,"procs":414,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0821 04:17:47.195868    3344 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0821 04:17:47.200550    3344 out.go:177] * [image-925000] minikube v1.31.2 on Darwin 13.5 (arm64)
	I0821 04:17:47.208466    3344 out.go:177]   - MINIKUBE_LOCATION=17102
	I0821 04:17:47.212420    3344 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	I0821 04:17:47.208497    3344 notify.go:220] Checking for updates...
	I0821 04:17:47.215489    3344 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0821 04:17:47.218394    3344 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0821 04:17:47.221460    3344 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	I0821 04:17:47.224327    3344 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0821 04:17:47.227609    3344 driver.go:373] Setting default libvirt URI to qemu:///system
	I0821 04:17:47.231416    3344 out.go:177] * Using the qemu2 driver based on user configuration
	I0821 04:17:47.237407    3344 start.go:298] selected driver: qemu2
	I0821 04:17:47.237410    3344 start.go:902] validating driver "qemu2" against <nil>
	I0821 04:17:47.237415    3344 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0821 04:17:47.237464    3344 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0821 04:17:47.240467    3344 out.go:177] * Automatically selected the socket_vmnet network
	I0821 04:17:47.243508    3344 start_flags.go:382] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0821 04:17:47.243637    3344 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0821 04:17:47.243657    3344 cni.go:84] Creating CNI manager for ""
	I0821 04:17:47.243662    3344 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0821 04:17:47.243666    3344 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0821 04:17:47.243671    3344 start_flags.go:319] config:
	{Name:image-925000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:image-925000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni
FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 04:17:47.247930    3344 iso.go:125] acquiring lock: {Name:mk813ea611542195bb0511881888be3fabc72ff6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 04:17:47.255455    3344 out.go:177] * Starting control plane node image-925000 in cluster image-925000
	I0821 04:17:47.258422    3344 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime docker
	I0821 04:17:47.258451    3344 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4
	I0821 04:17:47.258461    3344 cache.go:57] Caching tarball of preloaded images
	I0821 04:17:47.258520    3344 preload.go:174] Found /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0821 04:17:47.258523    3344 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on docker
	I0821 04:17:47.258954    3344 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/image-925000/config.json ...
	I0821 04:17:47.258977    3344 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/image-925000/config.json: {Name:mk3fae65abc6ab3f8f212c6d749d750333e3e67c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 04:17:47.259181    3344 start.go:365] acquiring machines lock for image-925000: {Name:mk9b32d9fe994be32d77812db464b2cfa7bfb400 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0821 04:17:47.259211    3344 start.go:369] acquired machines lock for "image-925000" in 25.667µs
	I0821 04:17:47.259220    3344 start.go:93] Provisioning new machine with config: &{Name:image-925000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:i
mage-925000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:
false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0821 04:17:47.259267    3344 start.go:125] createHost starting for "" (driver="qemu2")
	I0821 04:17:47.263271    3344 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0821 04:17:47.284169    3344 start.go:159] libmachine.API.Create for "image-925000" (driver="qemu2")
	I0821 04:17:47.284194    3344 client.go:168] LocalClient.Create starting
	I0821 04:17:47.284268    3344 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem
	I0821 04:17:47.284292    3344 main.go:141] libmachine: Decoding PEM data...
	I0821 04:17:47.284301    3344 main.go:141] libmachine: Parsing certificate...
	I0821 04:17:47.284339    3344 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/cert.pem
	I0821 04:17:47.284355    3344 main.go:141] libmachine: Decoding PEM data...
	I0821 04:17:47.284364    3344 main.go:141] libmachine: Parsing certificate...
	I0821 04:17:47.284660    3344 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17102-920/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17102-920/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0821 04:17:47.397833    3344 main.go:141] libmachine: Creating SSH key...
	I0821 04:17:47.636179    3344 main.go:141] libmachine: Creating Disk image...
	I0821 04:17:47.636185    3344 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0821 04:17:47.636373    3344 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17102-920/.minikube/machines/image-925000/disk.qcow2.raw /Users/jenkins/minikube-integration/17102-920/.minikube/machines/image-925000/disk.qcow2
	I0821 04:17:47.660400    3344 main.go:141] libmachine: STDOUT: 
	I0821 04:17:47.660429    3344 main.go:141] libmachine: STDERR: 
	I0821 04:17:47.660488    3344 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/image-925000/disk.qcow2 +20000M
	I0821 04:17:47.667885    3344 main.go:141] libmachine: STDOUT: Image resized.
	
	I0821 04:17:47.667895    3344 main.go:141] libmachine: STDERR: 
	I0821 04:17:47.667911    3344 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17102-920/.minikube/machines/image-925000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17102-920/.minikube/machines/image-925000/disk.qcow2
	I0821 04:17:47.667916    3344 main.go:141] libmachine: Starting QEMU VM...
	I0821 04:17:47.667958    3344 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17102-920/.minikube/machines/image-925000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/image-925000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17102-920/.minikube/machines/image-925000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:1f:ae:ac:c6:19 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/image-925000/disk.qcow2
	I0821 04:17:47.709465    3344 main.go:141] libmachine: STDOUT: 
	I0821 04:17:47.709489    3344 main.go:141] libmachine: STDERR: 
	I0821 04:17:47.709492    3344 main.go:141] libmachine: Attempt 0
	I0821 04:17:47.709505    3344 main.go:141] libmachine: Searching for b6:1f:ae:ac:c6:19 in /var/db/dhcpd_leases ...
	I0821 04:17:47.709581    3344 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0821 04:17:47.709595    3344 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:a:84:b8:5:75:ed ID:1,a:84:b8:5:75:ed Lease:0x64e4989b}
	I0821 04:17:47.709601    3344 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:8a:a1:22:f4:82:cf ID:1,8a:a1:22:f4:82:cf Lease:0x64e3470e}
	I0821 04:17:47.709605    3344 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:5e:15:38:20:81:6d ID:1,5e:15:38:20:81:6d Lease:0x64e48f18}
	I0821 04:17:49.711766    3344 main.go:141] libmachine: Attempt 1
	I0821 04:17:49.711812    3344 main.go:141] libmachine: Searching for b6:1f:ae:ac:c6:19 in /var/db/dhcpd_leases ...
	I0821 04:17:49.712148    3344 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0821 04:17:49.712191    3344 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:a:84:b8:5:75:ed ID:1,a:84:b8:5:75:ed Lease:0x64e4989b}
	I0821 04:17:49.712218    3344 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:8a:a1:22:f4:82:cf ID:1,8a:a1:22:f4:82:cf Lease:0x64e3470e}
	I0821 04:17:49.712246    3344 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:5e:15:38:20:81:6d ID:1,5e:15:38:20:81:6d Lease:0x64e48f18}
	I0821 04:17:51.712413    3344 main.go:141] libmachine: Attempt 2
	I0821 04:17:51.712427    3344 main.go:141] libmachine: Searching for b6:1f:ae:ac:c6:19 in /var/db/dhcpd_leases ...
	I0821 04:17:51.712576    3344 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0821 04:17:51.712603    3344 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:a:84:b8:5:75:ed ID:1,a:84:b8:5:75:ed Lease:0x64e4989b}
	I0821 04:17:51.712607    3344 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:8a:a1:22:f4:82:cf ID:1,8a:a1:22:f4:82:cf Lease:0x64e3470e}
	I0821 04:17:51.712612    3344 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:5e:15:38:20:81:6d ID:1,5e:15:38:20:81:6d Lease:0x64e48f18}
	I0821 04:17:53.714628    3344 main.go:141] libmachine: Attempt 3
	I0821 04:17:53.714633    3344 main.go:141] libmachine: Searching for b6:1f:ae:ac:c6:19 in /var/db/dhcpd_leases ...
	I0821 04:17:53.714756    3344 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0821 04:17:53.714773    3344 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:a:84:b8:5:75:ed ID:1,a:84:b8:5:75:ed Lease:0x64e4989b}
	I0821 04:17:53.714782    3344 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:8a:a1:22:f4:82:cf ID:1,8a:a1:22:f4:82:cf Lease:0x64e3470e}
	I0821 04:17:53.714786    3344 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:5e:15:38:20:81:6d ID:1,5e:15:38:20:81:6d Lease:0x64e48f18}
	I0821 04:17:55.715489    3344 main.go:141] libmachine: Attempt 4
	I0821 04:17:55.715494    3344 main.go:141] libmachine: Searching for b6:1f:ae:ac:c6:19 in /var/db/dhcpd_leases ...
	I0821 04:17:55.715547    3344 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0821 04:17:55.715553    3344 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:a:84:b8:5:75:ed ID:1,a:84:b8:5:75:ed Lease:0x64e4989b}
	I0821 04:17:55.715557    3344 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:8a:a1:22:f4:82:cf ID:1,8a:a1:22:f4:82:cf Lease:0x64e3470e}
	I0821 04:17:55.715562    3344 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:5e:15:38:20:81:6d ID:1,5e:15:38:20:81:6d Lease:0x64e48f18}
	I0821 04:17:57.717563    3344 main.go:141] libmachine: Attempt 5
	I0821 04:17:57.717572    3344 main.go:141] libmachine: Searching for b6:1f:ae:ac:c6:19 in /var/db/dhcpd_leases ...
	I0821 04:17:57.717664    3344 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0821 04:17:57.717673    3344 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:a:84:b8:5:75:ed ID:1,a:84:b8:5:75:ed Lease:0x64e4989b}
	I0821 04:17:57.717678    3344 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:8a:a1:22:f4:82:cf ID:1,8a:a1:22:f4:82:cf Lease:0x64e3470e}
	I0821 04:17:57.717686    3344 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:5e:15:38:20:81:6d ID:1,5e:15:38:20:81:6d Lease:0x64e48f18}
	I0821 04:17:59.719762    3344 main.go:141] libmachine: Attempt 6
	I0821 04:17:59.719782    3344 main.go:141] libmachine: Searching for b6:1f:ae:ac:c6:19 in /var/db/dhcpd_leases ...
	I0821 04:17:59.719937    3344 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0821 04:17:59.719957    3344 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:b6:1f:ae:ac:c6:19 ID:1,b6:1f:ae:ac:c6:19 Lease:0x64e49966}
	I0821 04:17:59.719963    3344 main.go:141] libmachine: Found match: b6:1f:ae:ac:c6:19
	I0821 04:17:59.719980    3344 main.go:141] libmachine: IP: 192.168.105.5
	I0821 04:17:59.719990    3344 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.5)...
	I0821 04:18:01.735516    3344 machine.go:88] provisioning docker machine ...
	I0821 04:18:01.735562    3344 buildroot.go:166] provisioning hostname "image-925000"
	I0821 04:18:01.735733    3344 main.go:141] libmachine: Using SSH client type: native
	I0821 04:18:01.736422    3344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10515e1e0] 0x105160c40 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0821 04:18:01.736436    3344 main.go:141] libmachine: About to run SSH command:
	sudo hostname image-925000 && echo "image-925000" | sudo tee /etc/hostname
	I0821 04:18:01.813610    3344 main.go:141] libmachine: SSH cmd err, output: <nil>: image-925000
	
	I0821 04:18:01.813741    3344 main.go:141] libmachine: Using SSH client type: native
	I0821 04:18:01.814224    3344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10515e1e0] 0x105160c40 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0821 04:18:01.814236    3344 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\simage-925000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 image-925000/g' /etc/hosts;
				else 
					echo '127.0.1.1 image-925000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0821 04:18:01.876493    3344 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0821 04:18:01.876522    3344 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17102-920/.minikube CaCertPath:/Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17102-920/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17102-920/.minikube}
	I0821 04:18:01.876545    3344 buildroot.go:174] setting up certificates
	I0821 04:18:01.876559    3344 provision.go:83] configureAuth start
	I0821 04:18:01.876563    3344 provision.go:138] copyHostCerts
	I0821 04:18:01.876681    3344 exec_runner.go:144] found /Users/jenkins/minikube-integration/17102-920/.minikube/ca.pem, removing ...
	I0821 04:18:01.876687    3344 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17102-920/.minikube/ca.pem
	I0821 04:18:01.876857    3344 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17102-920/.minikube/ca.pem (1078 bytes)
	I0821 04:18:01.877112    3344 exec_runner.go:144] found /Users/jenkins/minikube-integration/17102-920/.minikube/cert.pem, removing ...
	I0821 04:18:01.877114    3344 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17102-920/.minikube/cert.pem
	I0821 04:18:01.877187    3344 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17102-920/.minikube/cert.pem (1123 bytes)
	I0821 04:18:01.877344    3344 exec_runner.go:144] found /Users/jenkins/minikube-integration/17102-920/.minikube/key.pem, removing ...
	I0821 04:18:01.877346    3344 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17102-920/.minikube/key.pem
	I0821 04:18:01.877397    3344 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17102-920/.minikube/key.pem (1679 bytes)
	I0821 04:18:01.877503    3344 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17102-920/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca-key.pem org=jenkins.image-925000 san=[192.168.105.5 192.168.105.5 localhost 127.0.0.1 minikube image-925000]
	I0821 04:18:01.969183    3344 provision.go:172] copyRemoteCerts
	I0821 04:18:01.969218    3344 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0821 04:18:01.969224    3344 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/image-925000/id_rsa Username:docker}
	I0821 04:18:01.996842    3344 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0821 04:18:02.003521    3344 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0821 04:18:02.010661    3344 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0821 04:18:02.017696    3344 provision.go:86] duration metric: configureAuth took 141.127583ms
	I0821 04:18:02.017701    3344 buildroot.go:189] setting minikube options for container-runtime
	I0821 04:18:02.017804    3344 config.go:182] Loaded profile config "image-925000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0821 04:18:02.017846    3344 main.go:141] libmachine: Using SSH client type: native
	I0821 04:18:02.018059    3344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10515e1e0] 0x105160c40 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0821 04:18:02.018063    3344 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0821 04:18:02.068710    3344 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0821 04:18:02.068715    3344 buildroot.go:70] root file system type: tmpfs
	I0821 04:18:02.068767    3344 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0821 04:18:02.068817    3344 main.go:141] libmachine: Using SSH client type: native
	I0821 04:18:02.069055    3344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10515e1e0] 0x105160c40 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0821 04:18:02.069090    3344 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0821 04:18:02.124452    3344 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0821 04:18:02.124492    3344 main.go:141] libmachine: Using SSH client type: native
	I0821 04:18:02.124726    3344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10515e1e0] 0x105160c40 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0821 04:18:02.124733    3344 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0821 04:18:02.478711    3344 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0821 04:18:02.478719    3344 machine.go:91] provisioned docker machine in 743.197417ms
	I0821 04:18:02.478724    3344 client.go:171] LocalClient.Create took 15.194658166s
	I0821 04:18:02.478738    3344 start.go:167] duration metric: libmachine.API.Create for "image-925000" took 15.194702166s
	I0821 04:18:02.478741    3344 start.go:300] post-start starting for "image-925000" (driver="qemu2")
	I0821 04:18:02.478746    3344 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0821 04:18:02.478810    3344 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0821 04:18:02.478817    3344 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/image-925000/id_rsa Username:docker}
	I0821 04:18:02.505768    3344 ssh_runner.go:195] Run: cat /etc/os-release
	I0821 04:18:02.507014    3344 info.go:137] Remote host: Buildroot 2021.02.12
	I0821 04:18:02.507020    3344 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17102-920/.minikube/addons for local assets ...
	I0821 04:18:02.507080    3344 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17102-920/.minikube/files for local assets ...
	I0821 04:18:02.507176    3344 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17102-920/.minikube/files/etc/ssl/certs/13622.pem -> 13622.pem in /etc/ssl/certs
	I0821 04:18:02.507281    3344 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0821 04:18:02.509936    3344 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/files/etc/ssl/certs/13622.pem --> /etc/ssl/certs/13622.pem (1708 bytes)
	I0821 04:18:02.516594    3344 start.go:303] post-start completed in 37.849458ms
	I0821 04:18:02.516978    3344 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/image-925000/config.json ...
	I0821 04:18:02.517132    3344 start.go:128] duration metric: createHost completed in 15.257993083s
	I0821 04:18:02.517164    3344 main.go:141] libmachine: Using SSH client type: native
	I0821 04:18:02.517377    3344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10515e1e0] 0x105160c40 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0821 04:18:02.517379    3344 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0821 04:18:02.568033    3344 main.go:141] libmachine: SSH cmd err, output: <nil>: 1692616682.429983419
	
	I0821 04:18:02.568038    3344 fix.go:206] guest clock: 1692616682.429983419
	I0821 04:18:02.568041    3344 fix.go:219] Guest: 2023-08-21 04:18:02.429983419 -0700 PDT Remote: 2023-08-21 04:18:02.517135 -0700 PDT m=+15.357492543 (delta=-87.151581ms)
	I0821 04:18:02.568050    3344 fix.go:190] guest clock delta is within tolerance: -87.151581ms
	I0821 04:18:02.568051    3344 start.go:83] releasing machines lock for "image-925000", held for 15.308969208s
	I0821 04:18:02.568408    3344 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0821 04:18:02.568408    3344 ssh_runner.go:195] Run: cat /version.json
	I0821 04:18:02.568414    3344 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/image-925000/id_rsa Username:docker}
	I0821 04:18:02.568430    3344 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/image-925000/id_rsa Username:docker}
	I0821 04:18:02.596153    3344 ssh_runner.go:195] Run: systemctl --version
	I0821 04:18:02.637808    3344 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0821 04:18:02.639773    3344 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0821 04:18:02.639801    3344 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0821 04:18:02.645692    3344 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0821 04:18:02.645697    3344 start.go:466] detecting cgroup driver to use...
	I0821 04:18:02.645761    3344 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0821 04:18:02.651909    3344 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0821 04:18:02.655344    3344 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0821 04:18:02.658813    3344 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0821 04:18:02.658839    3344 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0821 04:18:02.661929    3344 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0821 04:18:02.664602    3344 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0821 04:18:02.667467    3344 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0821 04:18:02.670778    3344 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0821 04:18:02.673987    3344 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0821 04:18:02.676914    3344 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0821 04:18:02.679660    3344 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0821 04:18:02.682666    3344 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0821 04:18:02.747032    3344 ssh_runner.go:195] Run: sudo systemctl restart containerd
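	The containerd rewrite at 04:18:02.65 above is a fixed sequence of in-place sed edits. Collapsed into a single invocation for reference (same expressions as the log lines; assumes the default /etc/containerd/config.toml path):
	# Consolidated sketch of the edits above: pause:3.9 sandbox image,
	# cgroupfs driver, runc v2 shim, and the CNI conf dir, then a restart.
	sudo sed -i -r \
	  -e 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' \
	  -e 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' \
	  -e 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' \
	  -e 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' \
	  -e '/systemd_cgroup/d' \
	  -e 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' \
	  -e 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' \
	  /etc/containerd/config.toml
	sudo systemctl daemon-reload && sudo systemctl restart containerd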
	I0821 04:18:02.756717    3344 start.go:466] detecting cgroup driver to use...
	I0821 04:18:02.756782    3344 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0821 04:18:02.762582    3344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0821 04:18:02.766866    3344 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0821 04:18:02.772392    3344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0821 04:18:02.777160    3344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0821 04:18:02.781805    3344 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0821 04:18:02.819517    3344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0821 04:18:02.825003    3344 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0821 04:18:02.830484    3344 ssh_runner.go:195] Run: which cri-dockerd
	I0821 04:18:02.831675    3344 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0821 04:18:02.834657    3344 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0821 04:18:02.839601    3344 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0821 04:18:02.917013    3344 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0821 04:18:02.994708    3344 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0821 04:18:02.994717    3344 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0821 04:18:03.000053    3344 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0821 04:18:03.078427    3344 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0821 04:18:04.236945    3344 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.158516125s)
	I0821 04:18:04.237009    3344 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0821 04:18:04.312264    3344 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0821 04:18:04.390660    3344 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0821 04:18:04.462432    3344 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0821 04:18:04.542252    3344 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0821 04:18:04.549248    3344 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0821 04:18:04.629144    3344 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
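	The docker.go:535 line above reports a 144-byte /etc/docker/daemon.json switching Docker to the cgroupfs driver. The log does not print the file, so the following is only a plausible sketch; the exec-opts entry is the one piece implied by the log, the rest is assumed boilerplate:
	# Sketch of a daemon.json selecting cgroupfs; contents beyond
	# exec-opts are assumptions, not taken from this log.
	cat <<'EOF' | sudo tee /etc/docker/daemon.json
	{
	  "exec-opts": ["native.cgroupdriver=cgroupfs"],
	  "log-driver": "json-file",
	  "log-opts": { "max-size": "100m" },
	  "storage-driver": "overlay2"
	}
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart docker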
	I0821 04:18:04.651683    3344 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0821 04:18:04.651759    3344 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0821 04:18:04.653840    3344 start.go:534] Will wait 60s for crictl version
	I0821 04:18:04.653874    3344 ssh_runner.go:195] Run: which crictl
	I0821 04:18:04.655362    3344 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0821 04:18:04.670439    3344 start.go:550] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.4
	RuntimeApiVersion:  v1alpha2
	I0821 04:18:04.670502    3344 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0821 04:18:04.680222    3344 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0821 04:18:04.695745    3344 out.go:204] * Preparing Kubernetes v1.27.4 on Docker 24.0.4 ...
	I0821 04:18:04.695869    3344 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0821 04:18:04.697381    3344 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0821 04:18:04.700918    3344 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime docker
	I0821 04:18:04.700957    3344 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0821 04:18:04.706113    3344 docker.go:636] Got preloaded images: 
	I0821 04:18:04.706118    3344 docker.go:642] registry.k8s.io/kube-apiserver:v1.27.4 wasn't preloaded
	I0821 04:18:04.706161    3344 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0821 04:18:04.709400    3344 ssh_runner.go:195] Run: which lz4
	I0821 04:18:04.710788    3344 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0821 04:18:04.712144    3344 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0821 04:18:04.712155    3344 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (343658271 bytes)
	I0821 04:18:05.999158    3344 docker.go:600] Took 1.288409 seconds to copy over tarball
	I0821 04:18:05.999222    3344 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0821 04:18:07.028721    3344 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.029494291s)
	I0821 04:18:07.028730    3344 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0821 04:18:07.043833    3344 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0821 04:18:07.047284    3344 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0821 04:18:07.052352    3344 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0821 04:18:07.129209    3344 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0821 04:18:08.597375    3344 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.468165708s)
	I0821 04:18:08.597476    3344 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0821 04:18:08.603465    3344 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.4
	registry.k8s.io/kube-scheduler:v1.27.4
	registry.k8s.io/kube-controller-manager:v1.27.4
	registry.k8s.io/kube-proxy:v1.27.4
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0821 04:18:08.603471    3344 cache_images.go:84] Images are preloaded, skipping loading
	I0821 04:18:08.603522    3344 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0821 04:18:08.611188    3344 cni.go:84] Creating CNI manager for ""
	I0821 04:18:08.611193    3344 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0821 04:18:08.611207    3344 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0821 04:18:08.611215    3344 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.5 APIServerPort:8443 KubernetesVersion:v1.27.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:image-925000 NodeName:image-925000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0821 04:18:08.611271    3344 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "image-925000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0821 04:18:08.611302    3344 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=image-925000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.4 ClusterName:image-925000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0821 04:18:08.611350    3344 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.4
	I0821 04:18:08.614775    3344 binaries.go:44] Found k8s binaries, skipping transfer
	I0821 04:18:08.614802    3344 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0821 04:18:08.617792    3344 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I0821 04:18:08.622740    3344 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0821 04:18:08.627893    3344 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2098 bytes)
	I0821 04:18:08.632960    3344 ssh_runner.go:195] Run: grep 192.168.105.5	control-plane.minikube.internal$ /etc/hosts
	I0821 04:18:08.634212    3344 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.5	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0821 04:18:08.638009    3344 certs.go:56] Setting up /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/image-925000 for IP: 192.168.105.5
	I0821 04:18:08.638016    3344 certs.go:190] acquiring lock for shared ca certs: {Name:mkaf8bee91c9bef113528e728629bac5c142d5d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 04:18:08.638170    3344 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17102-920/.minikube/ca.key
	I0821 04:18:08.638205    3344 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17102-920/.minikube/proxy-client-ca.key
	I0821 04:18:08.638233    3344 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/image-925000/client.key
	I0821 04:18:08.638237    3344 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/image-925000/client.crt with IP's: []
	I0821 04:18:08.766926    3344 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/image-925000/client.crt ...
	I0821 04:18:08.766929    3344 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/image-925000/client.crt: {Name:mk47232ead5ba87d8c30e547fc948acc5e08a14c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 04:18:08.767166    3344 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/image-925000/client.key ...
	I0821 04:18:08.767168    3344 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/image-925000/client.key: {Name:mk6d54ce860feb20ff470a1144ca0305d6aac6c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 04:18:08.767276    3344 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/image-925000/apiserver.key.e69b33ca
	I0821 04:18:08.767282    3344 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/image-925000/apiserver.crt.e69b33ca with IP's: [192.168.105.5 10.96.0.1 127.0.0.1 10.0.0.1]
	I0821 04:18:08.861095    3344 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/image-925000/apiserver.crt.e69b33ca ...
	I0821 04:18:08.861097    3344 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/image-925000/apiserver.crt.e69b33ca: {Name:mke57439a7fd8fccb393c6002714ca6a67e66716 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 04:18:08.861230    3344 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/image-925000/apiserver.key.e69b33ca ...
	I0821 04:18:08.861232    3344 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/image-925000/apiserver.key.e69b33ca: {Name:mk890a9409803e02642836ddd47b58a83de1097e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 04:18:08.861331    3344 certs.go:337] copying /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/image-925000/apiserver.crt.e69b33ca -> /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/image-925000/apiserver.crt
	I0821 04:18:08.861535    3344 certs.go:341] copying /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/image-925000/apiserver.key.e69b33ca -> /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/image-925000/apiserver.key
	I0821 04:18:08.862073    3344 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/image-925000/proxy-client.key
	I0821 04:18:08.862099    3344 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/image-925000/proxy-client.crt with IP's: []
	I0821 04:18:08.896976    3344 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/image-925000/proxy-client.crt ...
	I0821 04:18:08.896979    3344 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/image-925000/proxy-client.crt: {Name:mk602a151e5c9dc337edf7015bd27d13a0cf0208 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 04:18:08.897116    3344 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/image-925000/proxy-client.key ...
	I0821 04:18:08.897118    3344 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/image-925000/proxy-client.key: {Name:mk52f6337750f451553bb1fd4eb852b8c5720bae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 04:18:08.897369    3344 certs.go:437] found cert: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/Users/jenkins/minikube-integration/17102-920/.minikube/certs/1362.pem (1338 bytes)
	W0821 04:18:08.897397    3344 certs.go:433] ignoring /Users/jenkins/minikube-integration/17102-920/.minikube/certs/Users/jenkins/minikube-integration/17102-920/.minikube/certs/1362_empty.pem, impossibly tiny 0 bytes
	I0821 04:18:08.897402    3344 certs.go:437] found cert: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca-key.pem (1679 bytes)
	I0821 04:18:08.897419    3344 certs.go:437] found cert: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem (1078 bytes)
	I0821 04:18:08.897435    3344 certs.go:437] found cert: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/Users/jenkins/minikube-integration/17102-920/.minikube/certs/cert.pem (1123 bytes)
	I0821 04:18:08.897450    3344 certs.go:437] found cert: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/Users/jenkins/minikube-integration/17102-920/.minikube/certs/key.pem (1679 bytes)
	I0821 04:18:08.897489    3344 certs.go:437] found cert: /Users/jenkins/minikube-integration/17102-920/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17102-920/.minikube/files/etc/ssl/certs/13622.pem (1708 bytes)
	I0821 04:18:08.897770    3344 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/image-925000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0821 04:18:08.905167    3344 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/image-925000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0821 04:18:08.911809    3344 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/image-925000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0821 04:18:08.918999    3344 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/image-925000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0821 04:18:08.926477    3344 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0821 04:18:08.933543    3344 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0821 04:18:08.940216    3344 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0821 04:18:08.947086    3344 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0821 04:18:08.954404    3344 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0821 04:18:08.961365    3344 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/certs/1362.pem --> /usr/share/ca-certificates/1362.pem (1338 bytes)
	I0821 04:18:08.968171    3344 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/files/etc/ssl/certs/13622.pem --> /usr/share/ca-certificates/13622.pem (1708 bytes)
	I0821 04:18:08.975236    3344 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0821 04:18:08.980423    3344 ssh_runner.go:195] Run: openssl version
	I0821 04:18:08.982306    3344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1362.pem && ln -fs /usr/share/ca-certificates/1362.pem /etc/ssl/certs/1362.pem"
	I0821 04:18:08.985292    3344 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1362.pem
	I0821 04:18:08.986848    3344 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 21 11:14 /usr/share/ca-certificates/1362.pem
	I0821 04:18:08.986866    3344 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1362.pem
	I0821 04:18:08.989060    3344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1362.pem /etc/ssl/certs/51391683.0"
	I0821 04:18:08.992016    3344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13622.pem && ln -fs /usr/share/ca-certificates/13622.pem /etc/ssl/certs/13622.pem"
	I0821 04:18:08.995409    3344 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13622.pem
	I0821 04:18:08.996882    3344 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 21 11:14 /usr/share/ca-certificates/13622.pem
	I0821 04:18:08.996901    3344 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13622.pem
	I0821 04:18:08.998660    3344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/13622.pem /etc/ssl/certs/3ec20f2e.0"
	I0821 04:18:09.001731    3344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0821 04:18:09.004661    3344 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0821 04:18:09.005968    3344 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 21 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I0821 04:18:09.005984    3344 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0821 04:18:09.007827    3344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
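	The link names above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject hashes: OpenSSL looks up CAs in /etc/ssl/certs by "<subject-hash>.0", so each certificate is hashed and then symlinked under that name. The pattern for a single certificate:
	# Reproduces the hash-and-link pattern above for one CA.
	pem=/usr/share/ca-certificates/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "$pem")   # prints e.g. b5213941
	sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"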
	I0821 04:18:09.010980    3344 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0821 04:18:09.012381    3344 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0821 04:18:09.012409    3344 kubeadm.go:404] StartCluster: {Name:image-925000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:image-925000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 04:18:09.012476    3344 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0821 04:18:09.017820    3344 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0821 04:18:09.020753    3344 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0821 04:18:09.023512    3344 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0821 04:18:09.026799    3344 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0821 04:18:09.026814    3344 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0821 04:18:09.049130    3344 kubeadm.go:322] [init] Using Kubernetes version: v1.27.4
	I0821 04:18:09.049151    3344 kubeadm.go:322] [preflight] Running pre-flight checks
	I0821 04:18:09.106660    3344 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0821 04:18:09.106716    3344 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0821 04:18:09.106766    3344 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0821 04:18:09.163661    3344 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0821 04:18:09.170867    3344 out.go:204]   - Generating certificates and keys ...
	I0821 04:18:09.170910    3344 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0821 04:18:09.170939    3344 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0821 04:18:09.268191    3344 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0821 04:18:09.449859    3344 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0821 04:18:09.579236    3344 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0821 04:18:09.680039    3344 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0821 04:18:09.780089    3344 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0821 04:18:09.780161    3344 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [image-925000 localhost] and IPs [192.168.105.5 127.0.0.1 ::1]
	I0821 04:18:09.913292    3344 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0821 04:18:09.913355    3344 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [image-925000 localhost] and IPs [192.168.105.5 127.0.0.1 ::1]
	I0821 04:18:09.976232    3344 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0821 04:18:10.121940    3344 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0821 04:18:10.182366    3344 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0821 04:18:10.182390    3344 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0821 04:18:10.263895    3344 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0821 04:18:10.308197    3344 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0821 04:18:10.388060    3344 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0821 04:18:10.481039    3344 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0821 04:18:10.487655    3344 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0821 04:18:10.488117    3344 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0821 04:18:10.488258    3344 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0821 04:18:10.560820    3344 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0821 04:18:10.567794    3344 out.go:204]   - Booting up control plane ...
	I0821 04:18:10.567846    3344 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0821 04:18:10.567878    3344 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0821 04:18:10.567906    3344 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0821 04:18:10.567980    3344 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0821 04:18:10.568062    3344 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0821 04:18:14.565551    3344 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.003162 seconds
	I0821 04:18:14.565662    3344 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0821 04:18:14.574830    3344 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0821 04:18:15.089602    3344 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0821 04:18:15.089724    3344 kubeadm.go:322] [mark-control-plane] Marking the node image-925000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0821 04:18:15.599741    3344 kubeadm.go:322] [bootstrap-token] Using token: ra7zoq.356xejv8bnc93mg8
	I0821 04:18:15.604123    3344 out.go:204]   - Configuring RBAC rules ...
	I0821 04:18:15.604223    3344 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0821 04:18:15.606367    3344 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0821 04:18:15.611508    3344 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0821 04:18:15.613737    3344 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0821 04:18:15.615438    3344 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0821 04:18:15.617538    3344 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0821 04:18:15.623460    3344 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0821 04:18:15.817094    3344 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0821 04:18:16.008125    3344 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0821 04:18:16.008680    3344 kubeadm.go:322] 
	I0821 04:18:16.008714    3344 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0821 04:18:16.008716    3344 kubeadm.go:322] 
	I0821 04:18:16.008763    3344 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0821 04:18:16.008766    3344 kubeadm.go:322] 
	I0821 04:18:16.008777    3344 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0821 04:18:16.008810    3344 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0821 04:18:16.008833    3344 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0821 04:18:16.008835    3344 kubeadm.go:322] 
	I0821 04:18:16.008872    3344 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0821 04:18:16.008874    3344 kubeadm.go:322] 
	I0821 04:18:16.008897    3344 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0821 04:18:16.008899    3344 kubeadm.go:322] 
	I0821 04:18:16.008937    3344 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0821 04:18:16.008979    3344 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0821 04:18:16.009020    3344 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0821 04:18:16.009022    3344 kubeadm.go:322] 
	I0821 04:18:16.009070    3344 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0821 04:18:16.009111    3344 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0821 04:18:16.009113    3344 kubeadm.go:322] 
	I0821 04:18:16.009160    3344 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ra7zoq.356xejv8bnc93mg8 \
	I0821 04:18:16.009214    3344 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:c361d9930575cb4141f86c9c696a425212668e350af0245a5e7de41b1bd48407 \
	I0821 04:18:16.009233    3344 kubeadm.go:322] 	--control-plane 
	I0821 04:18:16.009236    3344 kubeadm.go:322] 
	I0821 04:18:16.009281    3344 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0821 04:18:16.009283    3344 kubeadm.go:322] 
	I0821 04:18:16.009329    3344 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ra7zoq.356xejv8bnc93mg8 \
	I0821 04:18:16.009392    3344 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:c361d9930575cb4141f86c9c696a425212668e350af0245a5e7de41b1bd48407 
	I0821 04:18:16.009453    3344 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0821 04:18:16.009462    3344 cni.go:84] Creating CNI manager for ""
	I0821 04:18:16.009468    3344 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0821 04:18:16.017607    3344 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0821 04:18:16.021597    3344 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0821 04:18:16.024626    3344 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
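	The 457-byte /etc/cni/net.d/1-k8s.conflist installed above is minikube's bridge CNI configuration. A sketch of a minimal conflist of that shape; the 10.244.0.0/16 subnet matches the pod CIDR chosen by kubeadm.go earlier, but the remaining fields are assumptions rather than the file's verbatim contents:
	# Sketch of a minimal bridge conflist; only the pod CIDR is taken
	# from this log, other fields are illustrative assumptions.
	cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF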
	I0821 04:18:16.029162    3344 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0821 04:18:16.029207    3344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 04:18:16.029215    3344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43 minikube.k8s.io/name=image-925000 minikube.k8s.io/updated_at=2023_08_21T04_18_16_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 04:18:16.095556    3344 kubeadm.go:1081] duration metric: took 66.389042ms to wait for elevateKubeSystemPrivileges.
	I0821 04:18:16.095576    3344 ops.go:34] apiserver oom_adj: -16
	I0821 04:18:16.095578    3344 kubeadm.go:406] StartCluster complete in 7.083231791s
	I0821 04:18:16.095587    3344 settings.go:142] acquiring lock: {Name:mkeb461ec3a6a92ee32ce41e8df63d6759cb2728 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 04:18:16.095671    3344 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17102-920/kubeconfig
	I0821 04:18:16.096046    3344 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/kubeconfig: {Name:mk2bc9c64ad130c36a0253707ac2ba3f8fd22371 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 04:18:16.096253    3344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0821 04:18:16.096300    3344 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0821 04:18:16.096334    3344 addons.go:69] Setting storage-provisioner=true in profile "image-925000"
	I0821 04:18:16.096340    3344 addons.go:231] Setting addon storage-provisioner=true in "image-925000"
	I0821 04:18:16.096357    3344 host.go:66] Checking if "image-925000" exists ...
	I0821 04:18:16.096362    3344 addons.go:69] Setting default-storageclass=true in profile "image-925000"
	I0821 04:18:16.096371    3344 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "image-925000"
	I0821 04:18:16.096429    3344 config.go:182] Loaded profile config "image-925000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0821 04:18:16.100625    3344 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0821 04:18:16.104680    3344 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0821 04:18:16.104683    3344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0821 04:18:16.104690    3344 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/image-925000/id_rsa Username:docker}
	I0821 04:18:16.109124    3344 addons.go:231] Setting addon default-storageclass=true in "image-925000"
	I0821 04:18:16.109138    3344 host.go:66] Checking if "image-925000" exists ...
	I0821 04:18:16.109794    3344 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0821 04:18:16.109798    3344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0821 04:18:16.109803    3344 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/image-925000/id_rsa Username:docker}
	I0821 04:18:16.112492    3344 kapi.go:248] "coredns" deployment in "kube-system" namespace and "image-925000" context rescaled to 1 replicas
	I0821 04:18:16.112507    3344 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0821 04:18:16.119564    3344 out.go:177] * Verifying Kubernetes components...
	I0821 04:18:16.122609    3344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0821 04:18:16.148691    3344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0821 04:18:16.148693    3344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0821 04:18:16.149070    3344 api_server.go:52] waiting for apiserver process to appear ...
	I0821 04:18:16.149087    3344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0821 04:18:16.154088    3344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0821 04:18:16.582861    3344 start.go:901] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
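	The kubectl/sed pipeline at 04:18:16.148 above rewrites the CoreDNS Corefile in the coredns ConfigMap: it inserts a "log" directive before the existing "errors" line and a hosts block before the "forward . /etc/resolv.conf" line, so in-cluster lookups of host.minikube.internal resolve to the host. Derived from those sed expressions, the patched Corefile takes roughly this shape (unchanged directives elided as "..."):
	.:53 {
	    log
	    errors
	    ...
	    hosts {
	       192.168.105.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	    ...
	}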
	I0821 04:18:16.582887    3344 api_server.go:72] duration metric: took 470.374833ms to wait for apiserver process to appear ...
	I0821 04:18:16.582891    3344 api_server.go:88] waiting for apiserver healthz status ...
	I0821 04:18:16.582903    3344 api_server.go:253] Checking apiserver healthz at https://192.168.105.5:8443/healthz ...
	I0821 04:18:16.586433    3344 api_server.go:279] https://192.168.105.5:8443/healthz returned 200:
	ok
	I0821 04:18:16.587183    3344 api_server.go:141] control plane version: v1.27.4
	I0821 04:18:16.587187    3344 api_server.go:131] duration metric: took 4.294792ms to wait for apiserver health ...
	I0821 04:18:16.587190    3344 system_pods.go:43] waiting for kube-system pods to appear ...
	I0821 04:18:16.589920    3344 system_pods.go:59] 4 kube-system pods found
	I0821 04:18:16.589926    3344 system_pods.go:61] "etcd-image-925000" [0ce8b747-4e01-4e5b-a31e-4963cac532d5] Pending
	I0821 04:18:16.589928    3344 system_pods.go:61] "kube-apiserver-image-925000" [d77f10ff-6a13-4746-9faa-397f9a243c76] Pending
	I0821 04:18:16.589930    3344 system_pods.go:61] "kube-controller-manager-image-925000" [25d23ca2-4f52-45ac-9755-6803855ee423] Pending
	I0821 04:18:16.589931    3344 system_pods.go:61] "kube-scheduler-image-925000" [57a3545f-2bf0-4266-9331-87788f5d7a0e] Pending
	I0821 04:18:16.589933    3344 system_pods.go:74] duration metric: took 2.741541ms to wait for pod list to return data ...
	I0821 04:18:16.589936    3344 kubeadm.go:581] duration metric: took 477.424292ms to wait for : map[apiserver:true system_pods:true] ...
	I0821 04:18:16.589941    3344 node_conditions.go:102] verifying NodePressure condition ...
	I0821 04:18:16.591260    3344 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0821 04:18:16.591268    3344 node_conditions.go:123] node cpu capacity is 2
	I0821 04:18:16.591280    3344 node_conditions.go:105] duration metric: took 1.336791ms to run NodePressure ...
	I0821 04:18:16.591283    3344 start.go:228] waiting for startup goroutines ...
	I0821 04:18:16.671451    3344 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0821 04:18:16.678266    3344 addons.go:502] enable addons completed in 581.985917ms: enabled=[default-storageclass storage-provisioner]
	I0821 04:18:16.678276    3344 start.go:233] waiting for cluster config update ...
	I0821 04:18:16.678280    3344 start.go:242] writing updated cluster config ...
	I0821 04:18:16.678516    3344 ssh_runner.go:195] Run: rm -f paused
	I0821 04:18:16.707285    3344 start.go:600] kubectl: 1.27.2, cluster: 1.27.4 (minor skew: 0)
	I0821 04:18:16.711212    3344 out.go:177] * Done! kubectl is now configured to use "image-925000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Mon 2023-08-21 11:17:58 UTC, ends at Mon 2023-08-21 11:18:18 UTC. --
	Aug 21 11:18:11 image-925000 dockerd[1114]: time="2023-08-21T11:18:11.447951131Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 21 11:18:11 image-925000 dockerd[1114]: time="2023-08-21T11:18:11.466655339Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 21 11:18:11 image-925000 dockerd[1114]: time="2023-08-21T11:18:11.466823089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 21 11:18:11 image-925000 dockerd[1114]: time="2023-08-21T11:18:11.466844839Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 21 11:18:11 image-925000 dockerd[1114]: time="2023-08-21T11:18:11.466853756Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 21 11:18:11 image-925000 dockerd[1114]: time="2023-08-21T11:18:11.482205006Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 21 11:18:11 image-925000 dockerd[1114]: time="2023-08-21T11:18:11.482250381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 21 11:18:11 image-925000 dockerd[1114]: time="2023-08-21T11:18:11.482261673Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 21 11:18:11 image-925000 dockerd[1114]: time="2023-08-21T11:18:11.482270131Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 21 11:18:11 image-925000 cri-dockerd[1004]: time="2023-08-21T11:18:11Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/81970f2fda9df4284012ac106849fa6890033b89285d77f402bcb07ef8c6280e/resolv.conf as [nameserver 192.168.105.1]"
	Aug 21 11:18:11 image-925000 dockerd[1114]: time="2023-08-21T11:18:11.554763631Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 21 11:18:11 image-925000 dockerd[1114]: time="2023-08-21T11:18:11.554838590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 21 11:18:11 image-925000 dockerd[1114]: time="2023-08-21T11:18:11.554851298Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 21 11:18:11 image-925000 dockerd[1114]: time="2023-08-21T11:18:11.554860173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 21 11:18:17 image-925000 dockerd[1107]: time="2023-08-21T11:18:17.631928217Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	Aug 21 11:18:17 image-925000 dockerd[1107]: time="2023-08-21T11:18:17.753009592Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	Aug 21 11:18:17 image-925000 dockerd[1107]: time="2023-08-21T11:18:17.771500509Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	Aug 21 11:18:17 image-925000 dockerd[1114]: time="2023-08-21T11:18:17.813129051Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 21 11:18:17 image-925000 dockerd[1114]: time="2023-08-21T11:18:17.813158093Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 21 11:18:17 image-925000 dockerd[1114]: time="2023-08-21T11:18:17.813168176Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 21 11:18:17 image-925000 dockerd[1114]: time="2023-08-21T11:18:17.813174301Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 21 11:18:17 image-925000 dockerd[1107]: time="2023-08-21T11:18:17.956433926Z" level=info msg="ignoring event" container=0e0332cd7d49e834d8fe61d6a6358c9c4148ec781651cf674f2e917016de5885 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 21 11:18:17 image-925000 dockerd[1114]: time="2023-08-21T11:18:17.956538759Z" level=info msg="shim disconnected" id=0e0332cd7d49e834d8fe61d6a6358c9c4148ec781651cf674f2e917016de5885 namespace=moby
	Aug 21 11:18:17 image-925000 dockerd[1114]: time="2023-08-21T11:18:17.956663593Z" level=warning msg="cleaning up after shim disconnected" id=0e0332cd7d49e834d8fe61d6a6358c9c4148ec781651cf674f2e917016de5885 namespace=moby
	Aug 21 11:18:17 image-925000 dockerd[1114]: time="2023-08-21T11:18:17.956672968Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	de2d78d0a5bf0       6eb63895cb67f       7 seconds ago       Running             kube-scheduler            0                   81970f2fda9df
	32885a917381d       389f6f052cf83       7 seconds ago       Running             kube-controller-manager   0                   2261ca2fb579c
	cb50fbebb5b0c       64aece92d6bde       7 seconds ago       Running             kube-apiserver            0                   7b19901e3d4f8
	4946c68a47a0e       24bc64e911039       7 seconds ago       Running             etcd                      0                   a5b44b06c1458
	
	* 
	* ==> describe nodes <==
	* Name:               image-925000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=image-925000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43
	                    minikube.k8s.io/name=image-925000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_21T04_18_16_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 21 Aug 2023 11:18:13 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  image-925000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 21 Aug 2023 11:18:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 21 Aug 2023 11:18:15 +0000   Mon, 21 Aug 2023 11:18:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 21 Aug 2023 11:18:15 +0000   Mon, 21 Aug 2023 11:18:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 21 Aug 2023 11:18:15 +0000   Mon, 21 Aug 2023 11:18:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Mon, 21 Aug 2023 11:18:15 +0000   Mon, 21 Aug 2023 11:18:12 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
	Addresses:
	  InternalIP:  192.168.105.5
	  Hostname:    image-925000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 3b005846e5e2491a98d9f952c21f2b30
	  System UUID:                3b005846e5e2491a98d9f952c21f2b30
	  Boot ID:                    23d4980b-3d33-455d-ba15-31b066b6d0d3
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.4
	  Kubelet Version:            v1.27.4
	  Kube-Proxy Version:         v1.27.4
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-image-925000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         3s
	  kube-system                 kube-apiserver-image-925000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-controller-manager-image-925000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-image-925000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             100Mi (2%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age              From     Message
	  ----    ------                   ----             ----     -------
	  Normal  Starting                 8s               kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s (x8 over 8s)  kubelet  Node image-925000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s (x8 over 8s)  kubelet  Node image-925000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s (x7 over 8s)  kubelet  Node image-925000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8s               kubelet  Updated Node Allocatable limit across pods
	  Normal  Starting                 3s               kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  3s               kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3s               kubelet  Node image-925000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3s               kubelet  Node image-925000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3s               kubelet  Node image-925000 status is now: NodeHasSufficientPID
	
	* 
	* ==> dmesg <==
	* [Aug21 11:17] ACPI: SRAT not present
	[  +0.000000] KASLR disabled due to lack of seed
	[  +0.664235] EINJ: EINJ table not found.
	[  +0.516897] systemd-fstab-generator[117]: Ignoring "noauto" for root device
	[  +0.043109] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000789] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[Aug21 11:18] systemd-fstab-generator[483]: Ignoring "noauto" for root device
	[  +0.082378] systemd-fstab-generator[494]: Ignoring "noauto" for root device
	[  +0.401437] systemd-fstab-generator[673]: Ignoring "noauto" for root device
	[  +0.170943] systemd-fstab-generator[711]: Ignoring "noauto" for root device
	[  +0.076418] systemd-fstab-generator[722]: Ignoring "noauto" for root device
	[  +0.086052] systemd-fstab-generator[735]: Ignoring "noauto" for root device
	[  +1.233779] systemd-fstab-generator[924]: Ignoring "noauto" for root device
	[  +0.076929] systemd-fstab-generator[935]: Ignoring "noauto" for root device
	[  +0.072994] systemd-fstab-generator[946]: Ignoring "noauto" for root device
	[  +0.080249] systemd-fstab-generator[957]: Ignoring "noauto" for root device
	[  +0.084767] systemd-fstab-generator[997]: Ignoring "noauto" for root device
	[  +2.500127] systemd-fstab-generator[1100]: Ignoring "noauto" for root device
	[  +1.446735] kauditd_printk_skb: 53 callbacks suppressed
	[  +1.979232] systemd-fstab-generator[1428]: Ignoring "noauto" for root device
	[  +5.155756] systemd-fstab-generator[2327]: Ignoring "noauto" for root device
	[  +2.251781] kauditd_printk_skb: 41 callbacks suppressed
	
	* 
	* ==> etcd [4946c68a47a0] <==
	* {"level":"info","ts":"2023-08-21T11:18:11.939Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"58de0efec1d86300","initial-advertise-peer-urls":["https://192.168.105.5:2380"],"listen-peer-urls":["https://192.168.105.5:2380"],"advertise-client-urls":["https://192.168.105.5:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.5:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-08-21T11:18:11.939Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-08-21T11:18:11.938Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.105.5:2380"}
	{"level":"info","ts":"2023-08-21T11:18:11.939Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.105.5:2380"}
	{"level":"info","ts":"2023-08-21T11:18:11.938Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 switched to configuration voters=(6403572207504089856)"}
	{"level":"info","ts":"2023-08-21T11:18:11.939Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"cd5c0afff2184bea","local-member-id":"58de0efec1d86300","added-peer-id":"58de0efec1d86300","added-peer-peer-urls":["https://192.168.105.5:2380"]}
	{"level":"info","ts":"2023-08-21T11:18:11.938Z","caller":"etcdserver/server.go:738","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"58de0efec1d86300","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2023-08-21T11:18:12.140Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 is starting a new election at term 1"}
	{"level":"info","ts":"2023-08-21T11:18:12.140Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-08-21T11:18:12.140Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 received MsgPreVoteResp from 58de0efec1d86300 at term 1"}
	{"level":"info","ts":"2023-08-21T11:18:12.140Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became candidate at term 2"}
	{"level":"info","ts":"2023-08-21T11:18:12.140Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 received MsgVoteResp from 58de0efec1d86300 at term 2"}
	{"level":"info","ts":"2023-08-21T11:18:12.140Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became leader at term 2"}
	{"level":"info","ts":"2023-08-21T11:18:12.140Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 58de0efec1d86300 elected leader 58de0efec1d86300 at term 2"}
	{"level":"info","ts":"2023-08-21T11:18:12.141Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-21T11:18:12.143Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"58de0efec1d86300","local-member-attributes":"{Name:image-925000 ClientURLs:[https://192.168.105.5:2379]}","request-path":"/0/members/58de0efec1d86300/attributes","cluster-id":"cd5c0afff2184bea","publish-timeout":"7s"}
	{"level":"info","ts":"2023-08-21T11:18:12.143Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-21T11:18:12.144Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.105.5:2379"}
	{"level":"info","ts":"2023-08-21T11:18:12.144Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-21T11:18:12.144Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-08-21T11:18:12.144Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-08-21T11:18:12.144Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-08-21T11:18:12.186Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cd5c0afff2184bea","local-member-id":"58de0efec1d86300","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-21T11:18:12.186Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-21T11:18:12.186Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	* 
	* ==> kernel <==
	*  11:18:18 up 0 min,  0 users,  load average: 1.01, 0.23, 0.07
	Linux image-925000 5.10.57 #1 SMP PREEMPT Fri Jul 14 22:49:12 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [cb50fbebb5b0] <==
	* I0821 11:18:13.016690       1 shared_informer.go:318] Caches are synced for configmaps
	I0821 11:18:13.016776       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0821 11:18:13.016805       1 aggregator.go:152] initial CRD sync complete...
	I0821 11:18:13.016832       1 autoregister_controller.go:141] Starting autoregister controller
	I0821 11:18:13.016868       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0821 11:18:13.016888       1 cache.go:39] Caches are synced for autoregister controller
	I0821 11:18:13.016966       1 controller.go:624] quota admission added evaluator for: namespaces
	I0821 11:18:13.017164       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0821 11:18:13.017182       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0821 11:18:13.027152       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0821 11:18:13.030680       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0821 11:18:13.768625       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0821 11:18:13.928133       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0821 11:18:13.934273       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0821 11:18:13.934299       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0821 11:18:14.113014       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0821 11:18:14.122921       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0821 11:18:14.160779       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0821 11:18:14.164903       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [192.168.105.5]
	I0821 11:18:14.165823       1 controller.go:624] quota admission added evaluator for: endpoints
	I0821 11:18:14.167841       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0821 11:18:14.947108       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0821 11:18:15.673952       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0821 11:18:15.678594       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0821 11:18:15.683095       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	
	* 
	* ==> kube-controller-manager [32885a917381] <==
	* I0821 11:18:17.196721       1 shared_informer.go:311] Waiting for caches to sync for bootstrap_signer
	I0821 11:18:17.345433       1 controllermanager.go:638] "Started controller" controller="root-ca-cert-publisher"
	I0821 11:18:17.345477       1 publisher.go:101] Starting root CA certificate configmap publisher
	I0821 11:18:17.345484       1 shared_informer.go:311] Waiting for caches to sync for crt configmap
	I0821 11:18:17.593407       1 controllermanager.go:638] "Started controller" controller="garbagecollector"
	I0821 11:18:17.593510       1 garbagecollector.go:155] "Starting controller" controller="garbagecollector"
	I0821 11:18:17.593523       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0821 11:18:17.593533       1 graph_builder.go:294] "Running" component="GraphBuilder"
	I0821 11:18:17.893349       1 controllermanager.go:638] "Started controller" controller="disruption"
	I0821 11:18:17.893383       1 disruption.go:423] Sending events to api server.
	I0821 11:18:17.893407       1 disruption.go:434] Starting disruption controller
	I0821 11:18:17.893411       1 shared_informer.go:311] Waiting for caches to sync for disruption
	I0821 11:18:18.045095       1 node_lifecycle_controller.go:431] "Controller will reconcile labels"
	I0821 11:18:18.045118       1 controllermanager.go:638] "Started controller" controller="nodelifecycle"
	I0821 11:18:18.045146       1 node_lifecycle_controller.go:465] "Sending events to api server"
	I0821 11:18:18.045154       1 node_lifecycle_controller.go:476] "Starting node controller"
	I0821 11:18:18.045157       1 shared_informer.go:311] Waiting for caches to sync for taint
	E0821 11:18:18.094543       1 core.go:213] "Failed to start cloud node lifecycle controller" err="no cloud provider provided"
	I0821 11:18:18.094558       1 controllermanager.go:616] "Warning: skipping controller" controller="cloud-node-lifecycle"
	I0821 11:18:18.245028       1 controllermanager.go:638] "Started controller" controller="ephemeral-volume"
	I0821 11:18:18.245059       1 controller.go:169] "Starting ephemeral volume controller"
	I0821 11:18:18.245064       1 shared_informer.go:311] Waiting for caches to sync for ephemeral
	I0821 11:18:18.399031       1 controllermanager.go:638] "Started controller" controller="podgc"
	I0821 11:18:18.399168       1 gc_controller.go:103] Starting GC controller
	I0821 11:18:18.399175       1 shared_informer.go:311] Waiting for caches to sync for GC
	
	* 
	* ==> kube-scheduler [de2d78d0a5bf] <==
	* W0821 11:18:12.986154       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0821 11:18:12.986181       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0821 11:18:12.986202       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0821 11:18:12.986209       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0821 11:18:12.986222       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0821 11:18:12.986226       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0821 11:18:12.985940       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0821 11:18:12.986232       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0821 11:18:12.986205       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0821 11:18:12.986258       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0821 11:18:12.986516       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0821 11:18:12.986524       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0821 11:18:13.852108       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0821 11:18:13.852139       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0821 11:18:13.940167       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0821 11:18:13.940228       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0821 11:18:13.958972       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0821 11:18:13.959005       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0821 11:18:13.993075       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0821 11:18:13.993118       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0821 11:18:14.013688       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0821 11:18:14.013812       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0821 11:18:14.040673       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0821 11:18:14.040732       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0821 11:18:17.183945       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-08-21 11:17:58 UTC, ends at Mon 2023-08-21 11:18:18 UTC. --
	Aug 21 11:18:15 image-925000 kubelet[2339]: I0821 11:18:15.827802    2339 topology_manager.go:212] "Topology Admit Handler"
	Aug 21 11:18:15 image-925000 kubelet[2339]: I0821 11:18:15.827820    2339 topology_manager.go:212] "Topology Admit Handler"
	Aug 21 11:18:15 image-925000 kubelet[2339]: I0821 11:18:15.827832    2339 topology_manager.go:212] "Topology Admit Handler"
	Aug 21 11:18:15 image-925000 kubelet[2339]: E0821 11:18:15.834421    2339 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-scheduler-image-925000\" already exists" pod="kube-system/kube-scheduler-image-925000"
	Aug 21 11:18:16 image-925000 kubelet[2339]: I0821 11:18:16.014852    2339 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b932b4152f3188de48aa999e4c1f69ac-k8s-certs\") pod \"kube-controller-manager-image-925000\" (UID: \"b932b4152f3188de48aa999e4c1f69ac\") " pod="kube-system/kube-controller-manager-image-925000"
	Aug 21 11:18:16 image-925000 kubelet[2339]: I0821 11:18:16.014874    2339 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b932b4152f3188de48aa999e4c1f69ac-usr-share-ca-certificates\") pod \"kube-controller-manager-image-925000\" (UID: \"b932b4152f3188de48aa999e4c1f69ac\") " pod="kube-system/kube-controller-manager-image-925000"
	Aug 21 11:18:16 image-925000 kubelet[2339]: I0821 11:18:16.014886    2339 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db888aaab4ea2c49ae944ba6f8a275b8-kubeconfig\") pod \"kube-scheduler-image-925000\" (UID: \"db888aaab4ea2c49ae944ba6f8a275b8\") " pod="kube-system/kube-scheduler-image-925000"
	Aug 21 11:18:16 image-925000 kubelet[2339]: I0821 11:18:16.014894    2339 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b932b4152f3188de48aa999e4c1f69ac-ca-certs\") pod \"kube-controller-manager-image-925000\" (UID: \"b932b4152f3188de48aa999e4c1f69ac\") " pod="kube-system/kube-controller-manager-image-925000"
	Aug 21 11:18:16 image-925000 kubelet[2339]: I0821 11:18:16.014906    2339 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b932b4152f3188de48aa999e4c1f69ac-flexvolume-dir\") pod \"kube-controller-manager-image-925000\" (UID: \"b932b4152f3188de48aa999e4c1f69ac\") " pod="kube-system/kube-controller-manager-image-925000"
	Aug 21 11:18:16 image-925000 kubelet[2339]: I0821 11:18:16.014916    2339 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cb27f185ffb0f650390cfabf1ade53d3-ca-certs\") pod \"kube-apiserver-image-925000\" (UID: \"cb27f185ffb0f650390cfabf1ade53d3\") " pod="kube-system/kube-apiserver-image-925000"
	Aug 21 11:18:16 image-925000 kubelet[2339]: I0821 11:18:16.014925    2339 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cb27f185ffb0f650390cfabf1ade53d3-k8s-certs\") pod \"kube-apiserver-image-925000\" (UID: \"cb27f185ffb0f650390cfabf1ade53d3\") " pod="kube-system/kube-apiserver-image-925000"
	Aug 21 11:18:16 image-925000 kubelet[2339]: I0821 11:18:16.014934    2339 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cb27f185ffb0f650390cfabf1ade53d3-usr-share-ca-certificates\") pod \"kube-apiserver-image-925000\" (UID: \"cb27f185ffb0f650390cfabf1ade53d3\") " pod="kube-system/kube-apiserver-image-925000"
	Aug 21 11:18:16 image-925000 kubelet[2339]: I0821 11:18:16.014945    2339 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b932b4152f3188de48aa999e4c1f69ac-kubeconfig\") pod \"kube-controller-manager-image-925000\" (UID: \"b932b4152f3188de48aa999e4c1f69ac\") " pod="kube-system/kube-controller-manager-image-925000"
	Aug 21 11:18:16 image-925000 kubelet[2339]: I0821 11:18:16.014953    2339 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/ffad5ca269762060adf682f57496de28-etcd-certs\") pod \"etcd-image-925000\" (UID: \"ffad5ca269762060adf682f57496de28\") " pod="kube-system/etcd-image-925000"
	Aug 21 11:18:16 image-925000 kubelet[2339]: I0821 11:18:16.014961    2339 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/ffad5ca269762060adf682f57496de28-etcd-data\") pod \"etcd-image-925000\" (UID: \"ffad5ca269762060adf682f57496de28\") " pod="kube-system/etcd-image-925000"
	Aug 21 11:18:16 image-925000 kubelet[2339]: I0821 11:18:16.703390    2339 apiserver.go:52] "Watching apiserver"
	Aug 21 11:18:16 image-925000 kubelet[2339]: I0821 11:18:16.714152    2339 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world"
	Aug 21 11:18:16 image-925000 kubelet[2339]: I0821 11:18:16.721371    2339 reconciler.go:41] "Reconciler: start to sync state"
	Aug 21 11:18:16 image-925000 kubelet[2339]: E0821 11:18:16.758855    2339 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-scheduler-image-925000\" already exists" pod="kube-system/kube-scheduler-image-925000"
	Aug 21 11:18:16 image-925000 kubelet[2339]: E0821 11:18:16.759960    2339 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-apiserver-image-925000\" already exists" pod="kube-system/kube-apiserver-image-925000"
	Aug 21 11:18:16 image-925000 kubelet[2339]: E0821 11:18:16.760101    2339 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"etcd-image-925000\" already exists" pod="kube-system/etcd-image-925000"
	Aug 21 11:18:16 image-925000 kubelet[2339]: I0821 11:18:16.764217    2339 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-image-925000" podStartSLOduration=1.764196009 podCreationTimestamp="2023-08-21 11:18:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-08-21 11:18:16.760180675 +0000 UTC m=+1.098721752" watchObservedRunningTime="2023-08-21 11:18:16.764196009 +0000 UTC m=+1.102737085"
	Aug 21 11:18:16 image-925000 kubelet[2339]: I0821 11:18:16.767462    2339 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/etcd-image-925000" podStartSLOduration=1.767435634 podCreationTimestamp="2023-08-21 11:18:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-08-21 11:18:16.764257134 +0000 UTC m=+1.102798169" watchObservedRunningTime="2023-08-21 11:18:16.767435634 +0000 UTC m=+1.105976710"
	Aug 21 11:18:16 image-925000 kubelet[2339]: I0821 11:18:16.770589    2339 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-image-925000" podStartSLOduration=1.770576342 podCreationTimestamp="2023-08-21 11:18:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-08-21 11:18:16.767553675 +0000 UTC m=+1.106094752" watchObservedRunningTime="2023-08-21 11:18:16.770576342 +0000 UTC m=+1.109117419"
	Aug 21 11:18:16 image-925000 kubelet[2339]: I0821 11:18:16.774213    2339 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-image-925000" podStartSLOduration=2.774198092 podCreationTimestamp="2023-08-21 11:18:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-08-21 11:18:16.770688425 +0000 UTC m=+1.109229460" watchObservedRunningTime="2023-08-21 11:18:16.774198092 +0000 UTC m=+1.112739169"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p image-925000 -n image-925000
helpers_test.go:261: (dbg) Run:  kubectl --context image-925000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: storage-provisioner
helpers_test.go:274: ======> post-mortem[TestImageBuild/serial/BuildWithBuildArg]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context image-925000 describe pod storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context image-925000 describe pod storage-provisioner: exit status 1 (36.711458ms)

** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:279: kubectl --context image-925000 describe pod storage-provisioner: exit status 1
--- FAIL: TestImageBuild/serial/BuildWithBuildArg (1.07s)
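For context, BuildWithBuildArg drives a Docker build argument through minikube's image build wrapper. A minimal sketch of the shape of such a build, assuming a hypothetical stand-in Dockerfile (the real testdata/image-build/test-arg fixture is not reproduced in this report); the minikube invocation mirrors the audit entries recorded later in these logs:

	# hypothetical stand-in for testdata/image-build/test-arg/Dockerfile
	cat > Dockerfile <<'EOF'
	FROM busybox
	ARG ENV_A
	# bake the build-arg value into the image so a test could assert on it
	RUN echo "$ENV_A" > /env_a.txt
	EOF
	# invocation mirroring the audit log entry for this test (run from the fixture directory)
	out/minikube-darwin-arm64 -p image-925000 image build -t aaa:latest \
	  --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache .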

TestIngressAddonLegacy/serial/ValidateIngressAddons (52.27s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-717000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-717000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (14.816850833s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-717000 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-717000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [7b0f5b55-12ee-4904-9e40-8e306c800799] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [7b0f5b55-12ee-4904-9e40-8e306c800799] Running
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.012286041s
addons_test.go:238: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-717000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-717000 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-717000 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.105.6
addons_test.go:273: (dbg) Non-zero exit: nslookup hello-john.test 192.168.105.6: exit status 1 (15.032113334s)

-- stdout --
	;; connection timed out; no servers could be reached
	

-- /stdout --
addons_test.go:275: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.105.6" : exit status 1
addons_test.go:279: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
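The timeout above shows no DNS answer coming back from the VM at 192.168.105.6. To separate an unreachable VM from a dead ingress-dns responder, the endpoint can be probed by hand; a minimal sketch, assuming dig and nc are available on the macOS host (these probes are not part of the test):

	# the exact probe the test runs (times out above)
	nslookup hello-john.test 192.168.105.6
	# same query via dig with a short explicit timeout for a faster signal
	dig +time=2 +tries=1 @192.168.105.6 hello-john.test
	# rough check that something is listening on UDP port 53 at all
	nc -zvu 192.168.105.6 53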
addons_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-717000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-717000 addons disable ingress-dns --alsologtostderr -v=1: (4.0337145s)
addons_test.go:287: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-717000 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-717000 addons disable ingress --alsologtostderr -v=1: (7.107430167s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ingress-addon-legacy-717000 -n ingress-addon-legacy-717000
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-717000 logs -n 25
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                   Args                   |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-818000 ssh findmnt            | functional-818000           | jenkins | v1.31.2 | 21 Aug 23 04:17 PDT |                     |
	|                | -T /mount1                               |                             |         |         |                     |                     |
	| update-context | functional-818000                        | functional-818000           | jenkins | v1.31.2 | 21 Aug 23 04:17 PDT | 21 Aug 23 04:17 PDT |
	|                | update-context                           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                             |         |         |                     |                     |
	| update-context | functional-818000                        | functional-818000           | jenkins | v1.31.2 | 21 Aug 23 04:17 PDT | 21 Aug 23 04:17 PDT |
	|                | update-context                           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                             |         |         |                     |                     |
	| update-context | functional-818000                        | functional-818000           | jenkins | v1.31.2 | 21 Aug 23 04:17 PDT | 21 Aug 23 04:17 PDT |
	|                | update-context                           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                             |         |         |                     |                     |
	| image          | functional-818000                        | functional-818000           | jenkins | v1.31.2 | 21 Aug 23 04:17 PDT | 21 Aug 23 04:17 PDT |
	|                | image ls --format short                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| image          | functional-818000                        | functional-818000           | jenkins | v1.31.2 | 21 Aug 23 04:17 PDT | 21 Aug 23 04:17 PDT |
	|                | image ls --format yaml                   |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| image          | functional-818000                        | functional-818000           | jenkins | v1.31.2 | 21 Aug 23 04:17 PDT | 21 Aug 23 04:17 PDT |
	|                | image ls --format json                   |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| image          | functional-818000                        | functional-818000           | jenkins | v1.31.2 | 21 Aug 23 04:17 PDT | 21 Aug 23 04:17 PDT |
	|                | image ls --format table                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| ssh            | functional-818000 ssh pgrep              | functional-818000           | jenkins | v1.31.2 | 21 Aug 23 04:17 PDT |                     |
	|                | buildkitd                                |                             |         |         |                     |                     |
	| image          | functional-818000 image build -t         | functional-818000           | jenkins | v1.31.2 | 21 Aug 23 04:17 PDT | 21 Aug 23 04:17 PDT |
	|                | localhost/my-image:functional-818000     |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr         |                             |         |         |                     |                     |
	| image          | functional-818000 image ls               | functional-818000           | jenkins | v1.31.2 | 21 Aug 23 04:17 PDT | 21 Aug 23 04:17 PDT |
	| delete         | -p functional-818000                     | functional-818000           | jenkins | v1.31.2 | 21 Aug 23 04:17 PDT | 21 Aug 23 04:17 PDT |
	| start          | -p image-925000 --driver=qemu2           | image-925000                | jenkins | v1.31.2 | 21 Aug 23 04:17 PDT | 21 Aug 23 04:18 PDT |
	|                |                                          |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-925000                | jenkins | v1.31.2 | 21 Aug 23 04:18 PDT | 21 Aug 23 04:18 PDT |
	|                | ./testdata/image-build/test-normal       |                             |         |         |                     |                     |
	|                | -p image-925000                          |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-925000                | jenkins | v1.31.2 | 21 Aug 23 04:18 PDT | 21 Aug 23 04:18 PDT |
	|                | --build-opt=build-arg=ENV_A=test_env_str |                             |         |         |                     |                     |
	|                | --build-opt=no-cache                     |                             |         |         |                     |                     |
	|                | ./testdata/image-build/test-arg -p       |                             |         |         |                     |                     |
	|                | image-925000                             |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-925000                | jenkins | v1.31.2 | 21 Aug 23 04:18 PDT | 21 Aug 23 04:18 PDT |
	|                | ./testdata/image-build/test-normal       |                             |         |         |                     |                     |
	|                | --build-opt=no-cache -p                  |                             |         |         |                     |                     |
	|                | image-925000                             |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-925000                | jenkins | v1.31.2 | 21 Aug 23 04:18 PDT | 21 Aug 23 04:18 PDT |
	|                | -f inner/Dockerfile                      |                             |         |         |                     |                     |
	|                | ./testdata/image-build/test-f            |                             |         |         |                     |                     |
	|                | -p image-925000                          |                             |         |         |                     |                     |
	| delete         | -p image-925000                          | image-925000                | jenkins | v1.31.2 | 21 Aug 23 04:18 PDT | 21 Aug 23 04:18 PDT |
	| start          | -p ingress-addon-legacy-717000           | ingress-addon-legacy-717000 | jenkins | v1.31.2 | 21 Aug 23 04:18 PDT | 21 Aug 23 04:19 PDT |
	|                | --kubernetes-version=v1.18.20            |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                   |                             |         |         |                     |                     |
	|                | --driver=qemu2                           |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-717000              | ingress-addon-legacy-717000 | jenkins | v1.31.2 | 21 Aug 23 04:19 PDT | 21 Aug 23 04:19 PDT |
	|                | addons enable ingress                    |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                   |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-717000              | ingress-addon-legacy-717000 | jenkins | v1.31.2 | 21 Aug 23 04:19 PDT | 21 Aug 23 04:19 PDT |
	|                | addons enable ingress-dns                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                   |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-717000              | ingress-addon-legacy-717000 | jenkins | v1.31.2 | 21 Aug 23 04:20 PDT | 21 Aug 23 04:20 PDT |
	|                | ssh curl -s http://127.0.0.1/            |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'             |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-717000 ip           | ingress-addon-legacy-717000 | jenkins | v1.31.2 | 21 Aug 23 04:20 PDT | 21 Aug 23 04:20 PDT |
	| addons         | ingress-addon-legacy-717000              | ingress-addon-legacy-717000 | jenkins | v1.31.2 | 21 Aug 23 04:20 PDT | 21 Aug 23 04:20 PDT |
	|                | addons disable ingress-dns               |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                   |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-717000              | ingress-addon-legacy-717000 | jenkins | v1.31.2 | 21 Aug 23 04:20 PDT | 21 Aug 23 04:20 PDT |
	|                | addons disable ingress                   |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                   |                             |         |         |                     |                     |
	|----------------|------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/21 04:18:19
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.7 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0821 04:18:19.300166    3381 out.go:296] Setting OutFile to fd 1 ...
	I0821 04:18:19.300284    3381 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:18:19.300287    3381 out.go:309] Setting ErrFile to fd 2...
	I0821 04:18:19.300289    3381 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:18:19.300388    3381 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17102-920/.minikube/bin
	I0821 04:18:19.301478    3381 out.go:303] Setting JSON to false
	I0821 04:18:19.316627    3381 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2873,"bootTime":1692613826,"procs":418,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0821 04:18:19.316714    3381 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0821 04:18:19.320584    3381 out.go:177] * [ingress-addon-legacy-717000] minikube v1.31.2 on Darwin 13.5 (arm64)
	I0821 04:18:19.332550    3381 notify.go:220] Checking for updates...
	I0821 04:18:19.336440    3381 out.go:177]   - MINIKUBE_LOCATION=17102
	I0821 04:18:19.339510    3381 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	I0821 04:18:19.342486    3381 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0821 04:18:19.345496    3381 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0821 04:18:19.348454    3381 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	I0821 04:18:19.351484    3381 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0821 04:18:19.352915    3381 driver.go:373] Setting default libvirt URI to qemu:///system
	I0821 04:18:19.356378    3381 out.go:177] * Using the qemu2 driver based on user configuration
	I0821 04:18:19.363299    3381 start.go:298] selected driver: qemu2
	I0821 04:18:19.363305    3381 start.go:902] validating driver "qemu2" against <nil>
	I0821 04:18:19.363322    3381 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0821 04:18:19.365403    3381 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0821 04:18:19.369487    3381 out.go:177] * Automatically selected the socket_vmnet network
	I0821 04:18:19.372599    3381 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0821 04:18:19.372627    3381 cni.go:84] Creating CNI manager for ""
	I0821 04:18:19.372636    3381 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0821 04:18:19.372647    3381 start_flags.go:319] config:
	{Name:ingress-addon-legacy-717000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-717000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 04:18:19.376978    3381 iso.go:125] acquiring lock: {Name:mk813ea611542195bb0511881888be3fabc72ff6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 04:18:19.384507    3381 out.go:177] * Starting control plane node ingress-addon-legacy-717000 in cluster ingress-addon-legacy-717000
	I0821 04:18:19.388478    3381 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0821 04:18:19.441106    3381 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I0821 04:18:19.441139    3381 cache.go:57] Caching tarball of preloaded images
	I0821 04:18:19.441303    3381 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0821 04:18:19.446543    3381 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0821 04:18:19.454484    3381 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0821 04:18:19.532243    3381 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4?checksum=md5:c8c260b886393123ce9d312d8ac2379e -> /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I0821 04:18:25.431010    3381 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0821 04:18:25.431144    3381 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0821 04:18:26.180142    3381 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I0821 04:18:26.180347    3381 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/ingress-addon-legacy-717000/config.json ...
	I0821 04:18:26.180374    3381 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/ingress-addon-legacy-717000/config.json: {Name:mk2f6acb54389540f901c6ad0c8a1e9c4f871e2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 04:18:26.180625    3381 start.go:365] acquiring machines lock for ingress-addon-legacy-717000: {Name:mk9b32d9fe994be32d77812db464b2cfa7bfb400 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0821 04:18:26.180659    3381 start.go:369] acquired machines lock for "ingress-addon-legacy-717000" in 28.125µs
	I0821 04:18:26.180671    3381 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-717000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-717000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0821 04:18:26.180706    3381 start.go:125] createHost starting for "" (driver="qemu2")
	I0821 04:18:26.189723    3381 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0821 04:18:26.204176    3381 start.go:159] libmachine.API.Create for "ingress-addon-legacy-717000" (driver="qemu2")
	I0821 04:18:26.204206    3381 client.go:168] LocalClient.Create starting
	I0821 04:18:26.204281    3381 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem
	I0821 04:18:26.204307    3381 main.go:141] libmachine: Decoding PEM data...
	I0821 04:18:26.204320    3381 main.go:141] libmachine: Parsing certificate...
	I0821 04:18:26.204361    3381 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/cert.pem
	I0821 04:18:26.204380    3381 main.go:141] libmachine: Decoding PEM data...
	I0821 04:18:26.204389    3381 main.go:141] libmachine: Parsing certificate...
	I0821 04:18:26.204720    3381 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17102-920/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17102-920/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0821 04:18:26.329744    3381 main.go:141] libmachine: Creating SSH key...
	I0821 04:18:26.405798    3381 main.go:141] libmachine: Creating Disk image...
	I0821 04:18:26.405805    3381 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0821 04:18:26.405957    3381 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17102-920/.minikube/machines/ingress-addon-legacy-717000/disk.qcow2.raw /Users/jenkins/minikube-integration/17102-920/.minikube/machines/ingress-addon-legacy-717000/disk.qcow2
	I0821 04:18:26.414609    3381 main.go:141] libmachine: STDOUT: 
	I0821 04:18:26.414627    3381 main.go:141] libmachine: STDERR: 
	I0821 04:18:26.414695    3381 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/ingress-addon-legacy-717000/disk.qcow2 +20000M
	I0821 04:18:26.421912    3381 main.go:141] libmachine: STDOUT: Image resized.
	
	I0821 04:18:26.421927    3381 main.go:141] libmachine: STDERR: 
	I0821 04:18:26.421947    3381 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17102-920/.minikube/machines/ingress-addon-legacy-717000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17102-920/.minikube/machines/ingress-addon-legacy-717000/disk.qcow2
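The disk-creation step above is two qemu-img invocations: convert the raw boot2docker seed image to qcow2, then grow the qcow2 image by the requested size. A minimal Go sketch of the same sequence (placeholder paths and a hypothetical helper name, not minikube's actual driver code):

	// disk_sketch.go - sketch of the qemu-img convert + resize step above.
	// Assumes qemu-img is on PATH; paths are placeholders.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func createDisk(raw, qcow2 string, sizeMB int) error {
		// Convert the raw seed image to qcow2, as in the log above.
		if out, err := exec.Command("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow2).CombinedOutput(); err != nil {
			return fmt.Errorf("convert: %v: %s", err, out)
		}
		// Grow the qcow2 image by the requested amount (+20000M in this run).
		if out, err := exec.Command("qemu-img", "resize", qcow2, fmt.Sprintf("+%dM", sizeMB)).CombinedOutput(); err != nil {
			return fmt.Errorf("resize: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		if err := createDisk("disk.qcow2.raw", "disk.qcow2", 20000); err != nil {
			fmt.Println(err)
		}
	}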
	I0821 04:18:26.421961    3381 main.go:141] libmachine: Starting QEMU VM...
	I0821 04:18:26.422003    3381 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4096 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17102-920/.minikube/machines/ingress-addon-legacy-717000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/ingress-addon-legacy-717000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17102-920/.minikube/machines/ingress-addon-legacy-717000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:81:1c:88:18:f5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/ingress-addon-legacy-717000/disk.qcow2
	I0821 04:18:26.456275    3381 main.go:141] libmachine: STDOUT: 
	I0821 04:18:26.456307    3381 main.go:141] libmachine: STDERR: 
	I0821 04:18:26.456311    3381 main.go:141] libmachine: Attempt 0
	I0821 04:18:26.456328    3381 main.go:141] libmachine: Searching for 2a:81:1c:88:18:f5 in /var/db/dhcpd_leases ...
	I0821 04:18:26.456397    3381 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0821 04:18:26.456418    3381 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:b6:1f:ae:ac:c6:19 ID:1,b6:1f:ae:ac:c6:19 Lease:0x64e49966}
	I0821 04:18:26.456426    3381 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:a:84:b8:5:75:ed ID:1,a:84:b8:5:75:ed Lease:0x64e4989b}
	I0821 04:18:26.456431    3381 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:8a:a1:22:f4:82:cf ID:1,8a:a1:22:f4:82:cf Lease:0x64e3470e}
	I0821 04:18:26.456436    3381 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:5e:15:38:20:81:6d ID:1,5e:15:38:20:81:6d Lease:0x64e48f18}
	I0821 04:18:28.458572    3381 main.go:141] libmachine: Attempt 1
	I0821 04:18:28.458651    3381 main.go:141] libmachine: Searching for 2a:81:1c:88:18:f5 in /var/db/dhcpd_leases ...
	I0821 04:18:28.459133    3381 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0821 04:18:28.459183    3381 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:b6:1f:ae:ac:c6:19 ID:1,b6:1f:ae:ac:c6:19 Lease:0x64e49966}
	I0821 04:18:28.459247    3381 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:a:84:b8:5:75:ed ID:1,a:84:b8:5:75:ed Lease:0x64e4989b}
	I0821 04:18:28.459293    3381 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:8a:a1:22:f4:82:cf ID:1,8a:a1:22:f4:82:cf Lease:0x64e3470e}
	I0821 04:18:28.459324    3381 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:5e:15:38:20:81:6d ID:1,5e:15:38:20:81:6d Lease:0x64e48f18}
	I0821 04:18:30.461458    3381 main.go:141] libmachine: Attempt 2
	I0821 04:18:30.461520    3381 main.go:141] libmachine: Searching for 2a:81:1c:88:18:f5 in /var/db/dhcpd_leases ...
	I0821 04:18:30.461621    3381 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0821 04:18:30.461635    3381 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:b6:1f:ae:ac:c6:19 ID:1,b6:1f:ae:ac:c6:19 Lease:0x64e49966}
	I0821 04:18:30.461640    3381 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:a:84:b8:5:75:ed ID:1,a:84:b8:5:75:ed Lease:0x64e4989b}
	I0821 04:18:30.461645    3381 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:8a:a1:22:f4:82:cf ID:1,8a:a1:22:f4:82:cf Lease:0x64e3470e}
	I0821 04:18:30.461650    3381 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:5e:15:38:20:81:6d ID:1,5e:15:38:20:81:6d Lease:0x64e48f18}
	I0821 04:18:32.463671    3381 main.go:141] libmachine: Attempt 3
	I0821 04:18:32.463682    3381 main.go:141] libmachine: Searching for 2a:81:1c:88:18:f5 in /var/db/dhcpd_leases ...
	I0821 04:18:32.463727    3381 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0821 04:18:32.463745    3381 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:b6:1f:ae:ac:c6:19 ID:1,b6:1f:ae:ac:c6:19 Lease:0x64e49966}
	I0821 04:18:32.463751    3381 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:a:84:b8:5:75:ed ID:1,a:84:b8:5:75:ed Lease:0x64e4989b}
	I0821 04:18:32.463756    3381 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:8a:a1:22:f4:82:cf ID:1,8a:a1:22:f4:82:cf Lease:0x64e3470e}
	I0821 04:18:32.463761    3381 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:5e:15:38:20:81:6d ID:1,5e:15:38:20:81:6d Lease:0x64e48f18}
	I0821 04:18:34.465783    3381 main.go:141] libmachine: Attempt 4
	I0821 04:18:34.465803    3381 main.go:141] libmachine: Searching for 2a:81:1c:88:18:f5 in /var/db/dhcpd_leases ...
	I0821 04:18:34.465857    3381 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0821 04:18:34.465869    3381 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:b6:1f:ae:ac:c6:19 ID:1,b6:1f:ae:ac:c6:19 Lease:0x64e49966}
	I0821 04:18:34.465874    3381 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:a:84:b8:5:75:ed ID:1,a:84:b8:5:75:ed Lease:0x64e4989b}
	I0821 04:18:34.465879    3381 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:8a:a1:22:f4:82:cf ID:1,8a:a1:22:f4:82:cf Lease:0x64e3470e}
	I0821 04:18:34.465884    3381 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:5e:15:38:20:81:6d ID:1,5e:15:38:20:81:6d Lease:0x64e48f18}
	I0821 04:18:36.467372    3381 main.go:141] libmachine: Attempt 5
	I0821 04:18:36.467446    3381 main.go:141] libmachine: Searching for 2a:81:1c:88:18:f5 in /var/db/dhcpd_leases ...
	I0821 04:18:36.467558    3381 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0821 04:18:36.467571    3381 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:b6:1f:ae:ac:c6:19 ID:1,b6:1f:ae:ac:c6:19 Lease:0x64e49966}
	I0821 04:18:36.467579    3381 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:a:84:b8:5:75:ed ID:1,a:84:b8:5:75:ed Lease:0x64e4989b}
	I0821 04:18:36.467584    3381 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:8a:a1:22:f4:82:cf ID:1,8a:a1:22:f4:82:cf Lease:0x64e3470e}
	I0821 04:18:36.467590    3381 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:5e:15:38:20:81:6d ID:1,5e:15:38:20:81:6d Lease:0x64e48f18}
	I0821 04:18:38.469674    3381 main.go:141] libmachine: Attempt 6
	I0821 04:18:38.469717    3381 main.go:141] libmachine: Searching for 2a:81:1c:88:18:f5 in /var/db/dhcpd_leases ...
	I0821 04:18:38.469844    3381 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0821 04:18:38.469861    3381 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:2a:81:1c:88:18:f5 ID:1,2a:81:1c:88:18:f5 Lease:0x64e4998d}
	I0821 04:18:38.469866    3381 main.go:141] libmachine: Found match: 2a:81:1c:88:18:f5
	I0821 04:18:38.469877    3381 main.go:141] libmachine: IP: 192.168.105.6
	I0821 04:18:38.469883    3381 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.6)...
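The retry loop above polls macOS's /var/db/dhcpd_leases every two seconds until the VM's MAC address shows up (attempt 6 here, after the lease for 192.168.105.6 appeared). A small Go sketch of the matching step, assuming each lease block lists ip_address before hw_address as the parsed entries above suggest; the helper name is hypothetical:

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// findIPForMAC scans /var/db/dhcpd_leases for a lease block whose
	// hw_address ends with the given MAC and returns that block's IP.
	func findIPForMAC(path, mac string) (string, error) {
		f, err := os.Open(path)
		if err != nil {
			return "", err
		}
		defer f.Close()
		var ip string
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if strings.HasPrefix(line, "ip_address=") {
				ip = strings.TrimPrefix(line, "ip_address=")
			}
			// hw_address lines look like "hw_address=1,2a:81:1c:88:18:f5".
			if strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, mac) {
				return ip, nil
			}
		}
		return "", fmt.Errorf("%s not found in %s", mac, path)
	}

	func main() {
		ip, err := findIPForMAC("/var/db/dhcpd_leases", "2a:81:1c:88:18:f5")
		fmt.Println(ip, err)
	}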
	I0821 04:18:39.477232    3381 machine.go:88] provisioning docker machine ...
	I0821 04:18:39.477255    3381 buildroot.go:166] provisioning hostname "ingress-addon-legacy-717000"
	I0821 04:18:39.477298    3381 main.go:141] libmachine: Using SSH client type: native
	I0821 04:18:39.477590    3381 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100f761e0] 0x100f78c40 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0821 04:18:39.477597    3381 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-717000 && echo "ingress-addon-legacy-717000" | sudo tee /etc/hostname
	I0821 04:18:39.499587    3381 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0821 04:18:42.606723    3381 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-717000
	
	I0821 04:18:42.606907    3381 main.go:141] libmachine: Using SSH client type: native
	I0821 04:18:42.607473    3381 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100f761e0] 0x100f78c40 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0821 04:18:42.607492    3381 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-717000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-717000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-717000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0821 04:18:42.687769    3381 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0821 04:18:42.687790    3381 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17102-920/.minikube CaCertPath:/Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17102-920/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17102-920/.minikube}
	I0821 04:18:42.687814    3381 buildroot.go:174] setting up certificates
	I0821 04:18:42.687829    3381 provision.go:83] configureAuth start
	I0821 04:18:42.687844    3381 provision.go:138] copyHostCerts
	I0821 04:18:42.687897    3381 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/17102-920/.minikube/ca.pem
	I0821 04:18:42.687974    3381 exec_runner.go:144] found /Users/jenkins/minikube-integration/17102-920/.minikube/ca.pem, removing ...
	I0821 04:18:42.687984    3381 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17102-920/.minikube/ca.pem
	I0821 04:18:42.688212    3381 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17102-920/.minikube/ca.pem (1078 bytes)
	I0821 04:18:42.688480    3381 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/17102-920/.minikube/cert.pem
	I0821 04:18:42.688521    3381 exec_runner.go:144] found /Users/jenkins/minikube-integration/17102-920/.minikube/cert.pem, removing ...
	I0821 04:18:42.688525    3381 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17102-920/.minikube/cert.pem
	I0821 04:18:42.688607    3381 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17102-920/.minikube/cert.pem (1123 bytes)
	I0821 04:18:42.688731    3381 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/17102-920/.minikube/key.pem
	I0821 04:18:42.688767    3381 exec_runner.go:144] found /Users/jenkins/minikube-integration/17102-920/.minikube/key.pem, removing ...
	I0821 04:18:42.688771    3381 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17102-920/.minikube/key.pem
	I0821 04:18:42.688875    3381 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17102-920/.minikube/key.pem (1679 bytes)
	I0821 04:18:42.689005    3381 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17102-920/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-717000 san=[192.168.105.6 192.168.105.6 localhost 127.0.0.1 minikube ingress-addon-legacy-717000]
	I0821 04:18:42.773541    3381 provision.go:172] copyRemoteCerts
	I0821 04:18:42.773575    3381 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0821 04:18:42.773584    3381 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/ingress-addon-legacy-717000/id_rsa Username:docker}
	I0821 04:18:42.807172    3381 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0821 04:18:42.807218    3381 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0821 04:18:42.813853    3381 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17102-920/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0821 04:18:42.813893    3381 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0821 04:18:42.820949    3381 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17102-920/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0821 04:18:42.820987    3381 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0821 04:18:42.828417    3381 provision.go:86] duration metric: configureAuth took 140.572459ms
	I0821 04:18:42.828424    3381 buildroot.go:189] setting minikube options for container-runtime
	I0821 04:18:42.828519    3381 config.go:182] Loaded profile config "ingress-addon-legacy-717000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0821 04:18:42.828552    3381 main.go:141] libmachine: Using SSH client type: native
	I0821 04:18:42.828764    3381 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100f761e0] 0x100f78c40 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0821 04:18:42.828775    3381 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0821 04:18:42.890437    3381 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0821 04:18:42.890445    3381 buildroot.go:70] root file system type: tmpfs
	I0821 04:18:42.890505    3381 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0821 04:18:42.890556    3381 main.go:141] libmachine: Using SSH client type: native
	I0821 04:18:42.890790    3381 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100f761e0] 0x100f78c40 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0821 04:18:42.890823    3381 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0821 04:18:42.957060    3381 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0821 04:18:42.957107    3381 main.go:141] libmachine: Using SSH client type: native
	I0821 04:18:42.957366    3381 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100f761e0] 0x100f78c40 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0821 04:18:42.957378    3381 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0821 04:18:43.317777    3381 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0821 04:18:43.317792    3381 machine.go:91] provisioned docker machine in 3.840582125s
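The diff-or-replace one-liner above keeps the unit install idempotent: docker is only re-enabled and restarted when the rendered unit actually differs from what is already on disk (here diff fails because no unit existed yet, so the new file is moved into place and the service enabled). A Go sketch of the same pattern, with a hypothetical helper name, meant to run as root on a systemd host:

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	// installUnit replaces the unit and restarts the service only when
	// the rendered content changed, mirroring the shell pattern above.
	func installUnit(path string, rendered []byte) error {
		old, err := os.ReadFile(path)
		if err == nil && bytes.Equal(old, rendered) {
			return nil // unchanged: skip daemon-reload and restart entirely
		}
		if err := os.WriteFile(path+".new", rendered, 0644); err != nil {
			return err
		}
		if err := os.Rename(path+".new", path); err != nil {
			return err
		}
		for _, args := range [][]string{
			{"daemon-reload"},
			{"enable", "docker"},
			{"restart", "docker"},
		} {
			if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
				return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
			}
		}
		return nil
	}

	func main() {
		fmt.Println(installUnit("/lib/systemd/system/docker.service", []byte("[Unit]\n")))
	}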
	I0821 04:18:43.317804    3381 client.go:171] LocalClient.Create took 17.113733666s
	I0821 04:18:43.317820    3381 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-717000" took 17.113794167s
	I0821 04:18:43.317828    3381 start.go:300] post-start starting for "ingress-addon-legacy-717000" (driver="qemu2")
	I0821 04:18:43.317834    3381 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0821 04:18:43.317900    3381 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0821 04:18:43.317910    3381 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/ingress-addon-legacy-717000/id_rsa Username:docker}
	I0821 04:18:43.351520    3381 ssh_runner.go:195] Run: cat /etc/os-release
	I0821 04:18:43.352813    3381 info.go:137] Remote host: Buildroot 2021.02.12
	I0821 04:18:43.352824    3381 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17102-920/.minikube/addons for local assets ...
	I0821 04:18:43.352904    3381 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17102-920/.minikube/files for local assets ...
	I0821 04:18:43.353008    3381 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17102-920/.minikube/files/etc/ssl/certs/13622.pem -> 13622.pem in /etc/ssl/certs
	I0821 04:18:43.353013    3381 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17102-920/.minikube/files/etc/ssl/certs/13622.pem -> /etc/ssl/certs/13622.pem
	I0821 04:18:43.353122    3381 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0821 04:18:43.356071    3381 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/files/etc/ssl/certs/13622.pem --> /etc/ssl/certs/13622.pem (1708 bytes)
	I0821 04:18:43.363365    3381 start.go:303] post-start completed in 45.531041ms
	I0821 04:18:43.363759    3381 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/ingress-addon-legacy-717000/config.json ...
	I0821 04:18:43.363916    3381 start.go:128] duration metric: createHost completed in 17.183354334s
	I0821 04:18:43.363945    3381 main.go:141] libmachine: Using SSH client type: native
	I0821 04:18:43.364166    3381 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100f761e0] 0x100f78c40 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0821 04:18:43.364174    3381 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0821 04:18:43.422499    3381 main.go:141] libmachine: SSH cmd err, output: <nil>: 1692616723.555649836
	
	I0821 04:18:43.422506    3381 fix.go:206] guest clock: 1692616723.555649836
	I0821 04:18:43.422510    3381 fix.go:219] Guest: 2023-08-21 04:18:43.555649836 -0700 PDT Remote: 2023-08-21 04:18:43.363919 -0700 PDT m=+24.083440043 (delta=191.730836ms)
	I0821 04:18:43.422525    3381 fix.go:190] guest clock delta is within tolerance: 191.730836ms
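The fix step above parses the guest's "date +%s.%N" output and compares it against the host clock, adjusting the guest only when the delta exceeds a tolerance; here the 191.7ms delta passes. A sketch of that comparison using the values from this run (the 2-second tolerance is illustrative, not minikube's configured value):

	package main

	import (
		"fmt"
		"math"
		"time"
	)

	// clockDeltaOK reports whether the guest clock is within tol of the host.
	func clockDeltaOK(guest, host time.Time, tol time.Duration) (time.Duration, bool) {
		delta := guest.Sub(host)
		return delta, math.Abs(float64(delta)) <= float64(tol)
	}

	func main() {
		guest := time.Unix(1692616723, 555649836) // parsed from "date +%s.%N" above
		host := time.Unix(1692616723, 363919000)
		d, ok := clockDeltaOK(guest, host, 2*time.Second)
		fmt.Println(d, ok) // ~191.73ms, true
	}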
	I0821 04:18:43.422560    3381 start.go:83] releasing machines lock for "ingress-addon-legacy-717000", held for 17.242043083s
	I0821 04:18:43.422839    3381 ssh_runner.go:195] Run: cat /version.json
	I0821 04:18:43.422847    3381 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0821 04:18:43.422854    3381 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/ingress-addon-legacy-717000/id_rsa Username:docker}
	I0821 04:18:43.422867    3381 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/ingress-addon-legacy-717000/id_rsa Username:docker}
	I0821 04:18:43.457519    3381 ssh_runner.go:195] Run: systemctl --version
	I0821 04:18:43.498231    3381 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0821 04:18:43.500245    3381 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0821 04:18:43.500275    3381 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0821 04:18:43.503983    3381 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0821 04:18:43.509439    3381 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0821 04:18:43.509446    3381 start.go:466] detecting cgroup driver to use...
	I0821 04:18:43.509523    3381 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0821 04:18:43.516984    3381 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0821 04:18:43.520256    3381 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0821 04:18:43.523246    3381 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0821 04:18:43.523269    3381 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0821 04:18:43.526015    3381 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0821 04:18:43.529203    3381 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0821 04:18:43.532557    3381 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0821 04:18:43.536065    3381 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0821 04:18:43.539060    3381 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0821 04:18:43.541775    3381 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0821 04:18:43.544798    3381 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0821 04:18:43.547892    3381 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0821 04:18:43.623221    3381 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0821 04:18:43.630920    3381 start.go:466] detecting cgroup driver to use...
	I0821 04:18:43.631014    3381 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0821 04:18:43.641734    3381 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0821 04:18:43.649141    3381 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0821 04:18:43.660759    3381 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0821 04:18:43.666063    3381 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0821 04:18:43.670945    3381 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0821 04:18:43.708698    3381 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0821 04:18:43.714057    3381 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0821 04:18:43.719216    3381 ssh_runner.go:195] Run: which cri-dockerd
	I0821 04:18:43.720745    3381 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0821 04:18:43.723724    3381 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0821 04:18:43.728756    3381 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0821 04:18:43.806012    3381 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0821 04:18:43.882748    3381 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0821 04:18:43.882762    3381 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0821 04:18:43.888380    3381 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0821 04:18:43.967676    3381 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0821 04:18:45.132983    3381 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.165302458s)
	I0821 04:18:45.133051    3381 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0821 04:18:45.142700    3381 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0821 04:18:45.154219    3381 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 24.0.4 ...
	I0821 04:18:45.154354    3381 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0821 04:18:45.155729    3381 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0821 04:18:45.159532    3381 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0821 04:18:45.159574    3381 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0821 04:18:45.164691    3381 docker.go:636] Got preloaded images: 
	I0821 04:18:45.164698    3381 docker.go:642] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0821 04:18:45.164734    3381 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0821 04:18:45.168023    3381 ssh_runner.go:195] Run: which lz4
	I0821 04:18:45.169238    3381 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 -> /preloaded.tar.lz4
	I0821 04:18:45.169320    3381 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0821 04:18:45.170815    3381 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0821 04:18:45.170835    3381 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (459739018 bytes)
	I0821 04:18:46.849888    3381 docker.go:600] Took 1.680620 seconds to copy over tarball
	I0821 04:18:46.849952    3381 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0821 04:18:48.135949    3381 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.285991375s)
	I0821 04:18:48.135961    3381 ssh_runner.go:146] rm: /preloaded.tar.lz4
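The preload flow above is: stat to see whether /preloaded.tar.lz4 already exists on the guest, scp the ~460 MB tarball over if not, unpack it with lz4-compressed tar into /var, then delete it. A Go sketch of the guest-side sequence; in the real flow these commands run over SSH, and the helper name is hypothetical:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// extractPreload mirrors the steps above: verify the tarball is present,
	// unpack it with lz4 into /var, then remove it to free disk space.
	func extractPreload(tarball string) error {
		if err := exec.Command("stat", tarball).Run(); err != nil {
			return fmt.Errorf("tarball missing (would be scp'd first): %w", err)
		}
		if out, err := exec.Command("tar", "-I", "lz4", "-C", "/var", "-xf", tarball).CombinedOutput(); err != nil {
			return fmt.Errorf("extract: %v: %s", err, out)
		}
		return exec.Command("rm", "-f", tarball).Run()
	}

	func main() { fmt.Println(extractPreload("/preloaded.tar.lz4")) }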
	I0821 04:18:48.160989    3381 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0821 04:18:48.165265    3381 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
	I0821 04:18:48.171644    3381 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0821 04:18:48.265537    3381 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0821 04:18:49.743589    3381 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.478045875s)
	I0821 04:18:49.743696    3381 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0821 04:18:49.749427    3381 docker.go:636] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0821 04:18:49.749436    3381 docker.go:642] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0821 04:18:49.749440    3381 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0821 04:18:49.785092    3381 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0821 04:18:49.785092    3381 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0821 04:18:49.787129    3381 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0821 04:18:49.787137    3381 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0821 04:18:49.787151    3381 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0821 04:18:49.787258    3381 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0821 04:18:49.787269    3381 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0821 04:18:49.787572    3381 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0821 04:18:49.789313    3381 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0821 04:18:49.789419    3381 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0821 04:18:49.792623    3381 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0821 04:18:49.793668    3381 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0821 04:18:49.793716    3381 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0821 04:18:49.793814    3381 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0821 04:18:49.793854    3381 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0821 04:18:49.793865    3381 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	W0821 04:18:50.399541    3381 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I0821 04:18:50.399675    3381 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0821 04:18:50.405840    3381 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I0821 04:18:50.405869    3381 docker.go:316] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0821 04:18:50.405915    3381 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
	I0821 04:18:50.411224    3381 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	W0821 04:18:50.455268    3381 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0821 04:18:50.455367    3381 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0821 04:18:50.461606    3381 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I0821 04:18:50.461633    3381 docker.go:316] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0821 04:18:50.461686    3381 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0821 04:18:50.467737    3381 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	W0821 04:18:50.647959    3381 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0821 04:18:50.648064    3381 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0821 04:18:50.654263    3381 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I0821 04:18:50.654296    3381 docker.go:316] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0821 04:18:50.654367    3381 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0821 04:18:50.660164    3381 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	W0821 04:18:50.837007    3381 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0821 04:18:50.837167    3381 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0821 04:18:50.844242    3381 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I0821 04:18:50.844265    3381 docker.go:316] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0821 04:18:50.844307    3381 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.18.20
	I0821 04:18:50.850193    3381 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	I0821 04:18:51.047596    3381 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0821 04:18:51.054060    3381 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I0821 04:18:51.054089    3381 docker.go:316] Removing image: registry.k8s.io/pause:3.2
	I0821 04:18:51.054141    3381 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
	I0821 04:18:51.059896    3381 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	W0821 04:18:51.250288    3381 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0821 04:18:51.250399    3381 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0821 04:18:51.257216    3381 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I0821 04:18:51.257244    3381 docker.go:316] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0821 04:18:51.257288    3381 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0821 04:18:51.263506    3381 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	W0821 04:18:51.476350    3381 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I0821 04:18:51.476468    3381 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0821 04:18:51.482842    3381 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I0821 04:18:51.482867    3381 docker.go:316] Removing image: registry.k8s.io/coredns:1.6.7
	I0821 04:18:51.482914    3381 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.7
	I0821 04:18:51.489297    3381 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	W0821 04:18:52.205084    3381 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0821 04:18:52.205659    3381 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0821 04:18:52.230819    3381 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0821 04:18:52.230884    3381 docker.go:316] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0821 04:18:52.231004    3381 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0821 04:18:52.255643    3381 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0821 04:18:52.255735    3381 cache_images.go:92] LoadImages completed in 2.50630825s
	W0821 04:18:52.255806    3381 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0: no such file or directory
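Each required image above is reconciled the same way: inspect the runtime's image ID, and when it does not match the expected arm64 hash (the preload shipped amd64 variants under k8s.gcr.io names), remove the stale image and reload it from the on-host cache tarball. The final warning shows that last step failing because the cached etcd file was never downloaded. A sketch of the per-image reconciliation (the docker commands run inside the guest in reality; hash and cache path are truncated for illustration):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// ensureImage mimics the reconciliation above: if the runtime's image ID
	// does not contain the expected hash, drop the image and reload it from
	// the cached arm64 tarball.
	func ensureImage(image, wantHash, cacheFile string) error {
		out, _ := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", image).Output()
		if strings.Contains(string(out), wantHash) {
			return nil // already the correct architecture
		}
		exec.Command("docker", "rmi", image).Run() // remove the amd64 variant
		if err := exec.Command("docker", "load", "-i", cacheFile).Run(); err != nil {
			return fmt.Errorf("load %s: %w", cacheFile, err)
		}
		return nil
	}

	func main() {
		err := ensureImage("registry.k8s.io/etcd:3.4.3-0",
			"29dd247b2572", // truncated hash, for illustration only
			"cache/images/arm64/registry.k8s.io/etcd_3.4.3-0")
		fmt.Println(err)
	}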
	I0821 04:18:52.255905    3381 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0821 04:18:52.270802    3381 cni.go:84] Creating CNI manager for ""
	I0821 04:18:52.270818    3381 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0821 04:18:52.270831    3381 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0821 04:18:52.270844    3381 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.6 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-717000 NodeName:ingress-addon-legacy-717000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.6"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.6 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0821 04:18:52.270969    3381 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.6
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-717000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.6
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.6"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0821 04:18:52.271035    3381 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-717000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-717000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
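The kubelet unit above is rendered from a template with the per-node values (binary path, hostname override, node IP) filled in. A hypothetical text/template rendering of the ExecStart line, not minikube's actual template:

	package main

	import (
		"os"
		"text/template"
	)

	// Illustrative template; minikube's real drop-in carries more flags.
	const unit = `[Service]
	ExecStart=
	ExecStart={{.Bin}} --container-runtime=docker --hostname-override={{.Node}} --node-ip={{.IP}}
	`

	func main() {
		t := template.Must(template.New("kubelet").Parse(unit))
		t.Execute(os.Stdout, map[string]string{
			"Bin":  "/var/lib/minikube/binaries/v1.18.20/kubelet",
			"Node": "ingress-addon-legacy-717000",
			"IP":   "192.168.105.6",
		})
	}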
	I0821 04:18:52.271104    3381 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0821 04:18:52.276332    3381 binaries.go:44] Found k8s binaries, skipping transfer
	I0821 04:18:52.276373    3381 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0821 04:18:52.280522    3381 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (355 bytes)
	I0821 04:18:52.287299    3381 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0821 04:18:52.293127    3381 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2127 bytes)
	I0821 04:18:52.298718    3381 ssh_runner.go:195] Run: grep 192.168.105.6	control-plane.minikube.internal$ /etc/hosts
	I0821 04:18:52.299991    3381 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.6	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0821 04:18:52.303725    3381 certs.go:56] Setting up /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/ingress-addon-legacy-717000 for IP: 192.168.105.6
	I0821 04:18:52.303737    3381 certs.go:190] acquiring lock for shared ca certs: {Name:mkaf8bee91c9bef113528e728629bac5c142d5d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 04:18:52.303898    3381 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17102-920/.minikube/ca.key
	I0821 04:18:52.303935    3381 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17102-920/.minikube/proxy-client-ca.key
	I0821 04:18:52.303961    3381 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/ingress-addon-legacy-717000/client.key
	I0821 04:18:52.303969    3381 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/ingress-addon-legacy-717000/client.crt with IP's: []
	I0821 04:18:52.373638    3381 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/ingress-addon-legacy-717000/client.crt ...
	I0821 04:18:52.373646    3381 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/ingress-addon-legacy-717000/client.crt: {Name:mk8a5d96ae7e2e024bdce331a1647c87655dbf46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 04:18:52.373883    3381 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/ingress-addon-legacy-717000/client.key ...
	I0821 04:18:52.373891    3381 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/ingress-addon-legacy-717000/client.key: {Name:mk3dab96888b5f9efd0406a250572064306e99bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 04:18:52.374017    3381 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/ingress-addon-legacy-717000/apiserver.key.b354f644
	I0821 04:18:52.374023    3381 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/ingress-addon-legacy-717000/apiserver.crt.b354f644 with IP's: [192.168.105.6 10.96.0.1 127.0.0.1 10.0.0.1]
	I0821 04:18:52.451493    3381 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/ingress-addon-legacy-717000/apiserver.crt.b354f644 ...
	I0821 04:18:52.451496    3381 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/ingress-addon-legacy-717000/apiserver.crt.b354f644: {Name:mk1843646ad142f0d69b77ce705a6fda35943868 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 04:18:52.451640    3381 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/ingress-addon-legacy-717000/apiserver.key.b354f644 ...
	I0821 04:18:52.451643    3381 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/ingress-addon-legacy-717000/apiserver.key.b354f644: {Name:mk5ab969beeb36c54d7be7ca0ec20ab5600ae416 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 04:18:52.451737    3381 certs.go:337] copying /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/ingress-addon-legacy-717000/apiserver.crt.b354f644 -> /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/ingress-addon-legacy-717000/apiserver.crt
	I0821 04:18:52.451897    3381 certs.go:341] copying /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/ingress-addon-legacy-717000/apiserver.key.b354f644 -> /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/ingress-addon-legacy-717000/apiserver.key
	I0821 04:18:52.451990    3381 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/ingress-addon-legacy-717000/proxy-client.key
	I0821 04:18:52.451998    3381 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/ingress-addon-legacy-717000/proxy-client.crt with IP's: []
	I0821 04:18:52.533628    3381 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/ingress-addon-legacy-717000/proxy-client.crt ...
	I0821 04:18:52.533631    3381 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/ingress-addon-legacy-717000/proxy-client.crt: {Name:mk7b9c53e74a2297b0cd8abfa2b255c8514fd4f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 04:18:52.533767    3381 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/ingress-addon-legacy-717000/proxy-client.key ...
	I0821 04:18:52.533770    3381 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/ingress-addon-legacy-717000/proxy-client.key: {Name:mk18018f475b6cf28721a134a02d56e63951df3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 04:18:52.533875    3381 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/ingress-addon-legacy-717000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0821 04:18:52.533889    3381 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/ingress-addon-legacy-717000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0821 04:18:52.533900    3381 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/ingress-addon-legacy-717000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0821 04:18:52.533911    3381 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/ingress-addon-legacy-717000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0821 04:18:52.533923    3381 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17102-920/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0821 04:18:52.533940    3381 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17102-920/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0821 04:18:52.533951    3381 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17102-920/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0821 04:18:52.533971    3381 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17102-920/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0821 04:18:52.534047    3381 certs.go:437] found cert: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/Users/jenkins/minikube-integration/17102-920/.minikube/certs/1362.pem (1338 bytes)
	W0821 04:18:52.534077    3381 certs.go:433] ignoring /Users/jenkins/minikube-integration/17102-920/.minikube/certs/Users/jenkins/minikube-integration/17102-920/.minikube/certs/1362_empty.pem, impossibly tiny 0 bytes
	I0821 04:18:52.534085    3381 certs.go:437] found cert: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca-key.pem (1679 bytes)
	I0821 04:18:52.534107    3381 certs.go:437] found cert: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem (1078 bytes)
	I0821 04:18:52.534127    3381 certs.go:437] found cert: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/Users/jenkins/minikube-integration/17102-920/.minikube/certs/cert.pem (1123 bytes)
	I0821 04:18:52.534152    3381 certs.go:437] found cert: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/Users/jenkins/minikube-integration/17102-920/.minikube/certs/key.pem (1679 bytes)
	I0821 04:18:52.534200    3381 certs.go:437] found cert: /Users/jenkins/minikube-integration/17102-920/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17102-920/.minikube/files/etc/ssl/certs/13622.pem (1708 bytes)
	I0821 04:18:52.534222    3381 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17102-920/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0821 04:18:52.534232    3381 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17102-920/.minikube/certs/1362.pem -> /usr/share/ca-certificates/1362.pem
	I0821 04:18:52.534243    3381 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17102-920/.minikube/files/etc/ssl/certs/13622.pem -> /usr/share/ca-certificates/13622.pem
	I0821 04:18:52.534617    3381 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/ingress-addon-legacy-717000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0821 04:18:52.541985    3381 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/ingress-addon-legacy-717000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0821 04:18:52.549059    3381 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/ingress-addon-legacy-717000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0821 04:18:52.556274    3381 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/ingress-addon-legacy-717000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0821 04:18:52.563249    3381 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0821 04:18:52.570112    3381 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0821 04:18:52.576804    3381 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0821 04:18:52.584171    3381 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0821 04:18:52.591604    3381 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0821 04:18:52.598344    3381 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/certs/1362.pem --> /usr/share/ca-certificates/1362.pem (1338 bytes)
	I0821 04:18:52.605094    3381 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17102-920/.minikube/files/etc/ssl/certs/13622.pem --> /usr/share/ca-certificates/13622.pem (1708 bytes)
	I0821 04:18:52.612167    3381 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0821 04:18:52.617432    3381 ssh_runner.go:195] Run: openssl version
	I0821 04:18:52.619498    3381 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0821 04:18:52.622606    3381 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0821 04:18:52.624054    3381 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 21 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I0821 04:18:52.624073    3381 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0821 04:18:52.626045    3381 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0821 04:18:52.629334    3381 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1362.pem && ln -fs /usr/share/ca-certificates/1362.pem /etc/ssl/certs/1362.pem"
	I0821 04:18:52.632783    3381 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1362.pem
	I0821 04:18:52.634449    3381 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 21 11:14 /usr/share/ca-certificates/1362.pem
	I0821 04:18:52.634470    3381 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1362.pem
	I0821 04:18:52.636275    3381 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1362.pem /etc/ssl/certs/51391683.0"
	I0821 04:18:52.639487    3381 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13622.pem && ln -fs /usr/share/ca-certificates/13622.pem /etc/ssl/certs/13622.pem"
	I0821 04:18:52.642677    3381 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13622.pem
	I0821 04:18:52.644144    3381 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 21 11:14 /usr/share/ca-certificates/13622.pem
	I0821 04:18:52.644164    3381 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13622.pem
	I0821 04:18:52.646087    3381 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/13622.pem /etc/ssl/certs/3ec20f2e.0"
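For context, each of the three certificate passes above installs one CA into the guest's OpenSSL trust store: link the PEM into /etc/ssl/certs, compute its subject hash, then create the <hash>.0 symlink that OpenSSL's CA lookup expects. A minimal shell sketch of one pass (cert name and hash value taken from the log; the sequencing is a reconstruction, not ssh_runner's exact invocation):

    cert=/usr/share/ca-certificates/minikubeCA.pem
    # Only trust the cert if it exists and is non-empty.
    sudo /bin/bash -c "test -s $cert && ln -fs $cert /etc/ssl/certs/minikubeCA.pem"
    # OpenSSL locates CAs via subject-hash filenames, e.g. b5213941.0.
    hash=$(openssl x509 -hash -noout -in "$cert")
    sudo /bin/bash -c "test -L /etc/ssl/certs/$hash.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/$hash.0"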
	I0821 04:18:52.649284    3381 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0821 04:18:52.650758    3381 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0821 04:18:52.650787    3381 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-717000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-717000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.6 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 04:18:52.650860    3381 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0821 04:18:52.656595    3381 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0821 04:18:52.660166    3381 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0821 04:18:52.663110    3381 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0821 04:18:52.665766    3381 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0821 04:18:52.665782    3381 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0821 04:18:52.690314    3381 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0821 04:18:52.690380    3381 kubeadm.go:322] [preflight] Running pre-flight checks
	I0821 04:18:52.772808    3381 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0821 04:18:52.772863    3381 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0821 04:18:52.772924    3381 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0821 04:18:52.817707    3381 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0821 04:18:52.818640    3381 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0821 04:18:52.818664    3381 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0821 04:18:52.897598    3381 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0821 04:18:52.904779    3381 out.go:204]   - Generating certificates and keys ...
	I0821 04:18:52.904812    3381 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0821 04:18:52.904851    3381 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0821 04:18:52.956847    3381 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0821 04:18:53.092288    3381 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0821 04:18:53.320166    3381 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0821 04:18:53.449023    3381 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0821 04:18:53.579284    3381 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0821 04:18:53.579370    3381 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-717000 localhost] and IPs [192.168.105.6 127.0.0.1 ::1]
	I0821 04:18:53.674281    3381 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0821 04:18:53.674365    3381 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-717000 localhost] and IPs [192.168.105.6 127.0.0.1 ::1]
	I0821 04:18:53.798852    3381 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0821 04:18:53.969680    3381 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0821 04:18:54.064670    3381 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0821 04:18:54.064783    3381 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0821 04:18:54.174268    3381 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0821 04:18:54.216956    3381 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0821 04:18:54.473139    3381 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0821 04:18:54.726431    3381 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0821 04:18:54.726638    3381 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0821 04:18:54.734117    3381 out.go:204]   - Booting up control plane ...
	I0821 04:18:54.734222    3381 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0821 04:18:54.734260    3381 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0821 04:18:54.734298    3381 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0821 04:18:54.734349    3381 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0821 04:18:54.734603    3381 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0821 04:19:05.740749    3381 kubeadm.go:322] [apiclient] All control plane components are healthy after 11.005079 seconds
	I0821 04:19:05.741053    3381 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0821 04:19:05.764319    3381 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0821 04:19:06.287250    3381 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0821 04:19:06.287340    3381 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-717000 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0821 04:19:06.800756    3381 kubeadm.go:322] [bootstrap-token] Using token: gvj489.w7w9pizcxemit0fq
	I0821 04:19:06.804210    3381 out.go:204]   - Configuring RBAC rules ...
	I0821 04:19:06.804398    3381 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0821 04:19:06.807496    3381 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0821 04:19:06.816421    3381 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0821 04:19:06.818246    3381 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0821 04:19:06.820050    3381 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0821 04:19:06.821915    3381 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0821 04:19:06.828865    3381 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0821 04:19:07.023131    3381 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0821 04:19:07.209904    3381 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0821 04:19:07.210591    3381 kubeadm.go:322] 
	I0821 04:19:07.210636    3381 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0821 04:19:07.210643    3381 kubeadm.go:322] 
	I0821 04:19:07.210686    3381 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0821 04:19:07.210689    3381 kubeadm.go:322] 
	I0821 04:19:07.210724    3381 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0821 04:19:07.210760    3381 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0821 04:19:07.210788    3381 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0821 04:19:07.210792    3381 kubeadm.go:322] 
	I0821 04:19:07.210826    3381 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0821 04:19:07.210931    3381 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0821 04:19:07.211015    3381 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0821 04:19:07.211024    3381 kubeadm.go:322] 
	I0821 04:19:07.211099    3381 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0821 04:19:07.211156    3381 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0821 04:19:07.211162    3381 kubeadm.go:322] 
	I0821 04:19:07.211228    3381 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token gvj489.w7w9pizcxemit0fq \
	I0821 04:19:07.211308    3381 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:c361d9930575cb4141f86c9c696a425212668e350af0245a5e7de41b1bd48407 \
	I0821 04:19:07.211325    3381 kubeadm.go:322]     --control-plane 
	I0821 04:19:07.211333    3381 kubeadm.go:322] 
	I0821 04:19:07.211395    3381 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0821 04:19:07.211406    3381 kubeadm.go:322] 
	I0821 04:19:07.211477    3381 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token gvj489.w7w9pizcxemit0fq \
	I0821 04:19:07.211560    3381 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:c361d9930575cb4141f86c9c696a425212668e350af0245a5e7de41b1bd48407 
	I0821 04:19:07.211716    3381 kubeadm.go:322] W0821 11:18:52.823388    1426 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0821 04:19:07.211825    3381 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0821 04:19:07.211936    3381 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.4. Latest validated version: 19.03
	I0821 04:19:07.212018    3381 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0821 04:19:07.212113    3381 kubeadm.go:322] W0821 11:18:54.864344    1426 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0821 04:19:07.212216    3381 kubeadm.go:322] W0821 11:18:54.865226    1426 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0821 04:19:07.212225    3381 cni.go:84] Creating CNI manager for ""
	I0821 04:19:07.212233    3381 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0821 04:19:07.212246    3381 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0821 04:19:07.212331    3381 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 04:19:07.212385    3381 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43 minikube.k8s.io/name=ingress-addon-legacy-717000 minikube.k8s.io/updated_at=2023_08_21T04_19_07_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 04:19:07.217941    3381 ops.go:34] apiserver oom_adj: -16
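The oom_adj probe above reads /proc/<pid>/oom_adj for the apiserver; the reported -16 sits near the bottom of the -17..15 range, meaning the kubelet has marked the apiserver as a process the kernel's OOM killer should avoid. The same check by hand (a sketch; assumes exactly one kube-apiserver process is running):

    pid=$(pgrep kube-apiserver)
    cat /proc/$pid/oom_adj   # -16: strongly shielded from the OOM killer (-17 disables it entirely)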
	I0821 04:19:07.275541    3381 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 04:19:07.314757    3381 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 04:19:07.855503    3381 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 04:19:08.355674    3381 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 04:19:08.855431    3381 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 04:19:09.355568    3381 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 04:19:09.855963    3381 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 04:19:10.355629    3381 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 04:19:10.855611    3381 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 04:19:11.355652    3381 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 04:19:11.855566    3381 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 04:19:12.355357    3381 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 04:19:12.855611    3381 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 04:19:13.355576    3381 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 04:19:13.855505    3381 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 04:19:14.355580    3381 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 04:19:14.855644    3381 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 04:19:15.355339    3381 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 04:19:15.855602    3381 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 04:19:16.355656    3381 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 04:19:16.855612    3381 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 04:19:17.355537    3381 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 04:19:17.855579    3381 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 04:19:18.355523    3381 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 04:19:18.855321    3381 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 04:19:19.354204    3381 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 04:19:19.855337    3381 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 04:19:20.355657    3381 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 04:19:20.855347    3381 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 04:19:21.355421    3381 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 04:19:21.855527    3381 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 04:19:21.959963    3381 kubeadm.go:1081] duration metric: took 14.747829625s to wait for elevateKubeSystemPrivileges.
	I0821 04:19:21.959977    3381 kubeadm.go:406] StartCluster complete in 29.309443417s
	I0821 04:19:21.959986    3381 settings.go:142] acquiring lock: {Name:mkeb461ec3a6a92ee32ce41e8df63d6759cb2728 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 04:19:21.960066    3381 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17102-920/kubeconfig
	I0821 04:19:21.960444    3381 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/kubeconfig: {Name:mk2bc9c64ad130c36a0253707ac2ba3f8fd22371 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 04:19:21.960611    3381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0821 04:19:21.960647    3381 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0821 04:19:21.960688    3381 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-717000"
	I0821 04:19:21.960700    3381 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-717000"
	I0821 04:19:21.960699    3381 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-717000"
	I0821 04:19:21.960715    3381 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-717000"
	I0821 04:19:21.960724    3381 host.go:66] Checking if "ingress-addon-legacy-717000" exists ...
	I0821 04:19:21.960858    3381 config.go:182] Loaded profile config "ingress-addon-legacy-717000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0821 04:19:21.960939    3381 kapi.go:59] client config for ingress-addon-legacy-717000: &rest.Config{Host:"https://192.168.105.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17102-920/.minikube/profiles/ingress-addon-legacy-717000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17102-920/.minikube/profiles/ingress-addon-legacy-717000/client.key", CAFile:"/Users/jenkins/minikube-integration/17102-920/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1023275f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0821 04:19:21.961390    3381 cert_rotation.go:137] Starting client certificate rotation controller
	I0821 04:19:21.961730    3381 kapi.go:59] client config for ingress-addon-legacy-717000: &rest.Config{Host:"https://192.168.105.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17102-920/.minikube/profiles/ingress-addon-legacy-717000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17102-920/.minikube/profiles/ingress-addon-legacy-717000/client.key", CAFile:"/Users/jenkins/minikube-integration/17102-920/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1023275f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0821 04:19:21.969455    3381 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0821 04:19:21.972463    3381 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0821 04:19:21.972470    3381 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0821 04:19:21.972479    3381 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/ingress-addon-legacy-717000/id_rsa Username:docker}
	I0821 04:19:21.978559    3381 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-717000"
	I0821 04:19:21.978579    3381 host.go:66] Checking if "ingress-addon-legacy-717000" exists ...
	I0821 04:19:21.979257    3381 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0821 04:19:21.979262    3381 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0821 04:19:21.979268    3381 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/ingress-addon-legacy-717000/id_rsa Username:docker}
	I0821 04:19:21.984970    3381 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-717000" context rescaled to 1 replicas
	I0821 04:19:21.984993    3381 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.6 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0821 04:19:21.986422    3381 out.go:177] * Verifying Kubernetes components...
	I0821 04:19:21.993358    3381 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0821 04:19:22.023080    3381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0821 04:19:22.023329    3381 kapi.go:59] client config for ingress-addon-legacy-717000: &rest.Config{Host:"https://192.168.105.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17102-920/.minikube/profiles/ingress-addon-legacy-717000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17102-920/.minikube/profiles/ingress-addon-legacy-717000/client.key", CAFile:"/Users/jenkins/minikube-integration/17102-920/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1023275f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0821 04:19:22.023461    3381 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-717000" to be "Ready" ...
	I0821 04:19:22.024930    3381 node_ready.go:49] node "ingress-addon-legacy-717000" has status "Ready":"True"
	I0821 04:19:22.024936    3381 node_ready.go:38] duration metric: took 1.467417ms waiting for node "ingress-addon-legacy-717000" to be "Ready" ...
	I0821 04:19:22.024939    3381 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0821 04:19:22.027599    3381 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-717000" in "kube-system" namespace to be "Ready" ...
	I0821 04:19:22.050065    3381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0821 04:19:22.056423    3381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0821 04:19:22.223314    3381 start.go:901] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
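The sed pipeline run at 04:19:22.023080 splices a hosts block ahead of CoreDNS's forward directive, which is what this "host record injected" line confirms: pods can now resolve host.minikube.internal to the host at 192.168.105.1. One way to verify the record landed (a sketch; the kubectl context name is assumed to match the profile name):

    kubectl --context ingress-addon-legacy-717000 -n kube-system \
      get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A3 'hosts {'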
	I0821 04:19:22.310275    3381 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0821 04:19:22.318958    3381 addons.go:502] enable addons completed in 358.313916ms: enabled=[storage-provisioner default-storageclass]
	I0821 04:19:24.046882    3381 pod_ready.go:92] pod "etcd-ingress-addon-legacy-717000" in "kube-system" namespace has status "Ready":"True"
	I0821 04:19:24.046919    3381 pod_ready.go:81] duration metric: took 2.019324333s waiting for pod "etcd-ingress-addon-legacy-717000" in "kube-system" namespace to be "Ready" ...
	I0821 04:19:24.046937    3381 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-717000" in "kube-system" namespace to be "Ready" ...
	I0821 04:19:24.054366    3381 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-717000" in "kube-system" namespace has status "Ready":"True"
	I0821 04:19:24.054393    3381 pod_ready.go:81] duration metric: took 7.443916ms waiting for pod "kube-apiserver-ingress-addon-legacy-717000" in "kube-system" namespace to be "Ready" ...
	I0821 04:19:24.054407    3381 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-717000" in "kube-system" namespace to be "Ready" ...
	I0821 04:19:24.061798    3381 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-717000" in "kube-system" namespace has status "Ready":"True"
	I0821 04:19:24.061820    3381 pod_ready.go:81] duration metric: took 7.375542ms waiting for pod "kube-controller-manager-ingress-addon-legacy-717000" in "kube-system" namespace to be "Ready" ...
	I0821 04:19:24.061835    3381 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-717000" in "kube-system" namespace to be "Ready" ...
	I0821 04:19:24.067871    3381 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-717000" in "kube-system" namespace has status "Ready":"True"
	I0821 04:19:24.067882    3381 pod_ready.go:81] duration metric: took 6.0375ms waiting for pod "kube-scheduler-ingress-addon-legacy-717000" in "kube-system" namespace to be "Ready" ...
	I0821 04:19:24.067890    3381 pod_ready.go:38] duration metric: took 2.042961417s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0821 04:19:24.067921    3381 api_server.go:52] waiting for apiserver process to appear ...
	I0821 04:19:24.068109    3381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0821 04:19:24.078363    3381 api_server.go:72] duration metric: took 2.093373791s to wait for apiserver process to appear ...
	I0821 04:19:24.078380    3381 api_server.go:88] waiting for apiserver healthz status ...
	I0821 04:19:24.078402    3381 api_server.go:253] Checking apiserver healthz at https://192.168.105.6:8443/healthz ...
	I0821 04:19:24.084898    3381 api_server.go:279] https://192.168.105.6:8443/healthz returned 200:
	ok
	I0821 04:19:24.085613    3381 api_server.go:141] control plane version: v1.18.20
	I0821 04:19:24.085624    3381 api_server.go:131] duration metric: took 7.239084ms to wait for apiserver health ...
	I0821 04:19:24.085629    3381 system_pods.go:43] waiting for kube-system pods to appear ...
	I0821 04:19:24.225321    3381 request.go:629] Waited for 139.593542ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/kube-system/pods
	I0821 04:19:24.231104    3381 system_pods.go:59] 7 kube-system pods found
	I0821 04:19:24.231127    3381 system_pods.go:61] "coredns-66bff467f8-h4xvc" [50b12795-6486-490b-b013-62afccd3af2a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0821 04:19:24.231137    3381 system_pods.go:61] "etcd-ingress-addon-legacy-717000" [785c3149-b079-4d22-b56a-584eade9ff2d] Running
	I0821 04:19:24.231144    3381 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-717000" [ca2cc0a6-dce1-4b9a-aacf-2cf562e3879a] Running
	I0821 04:19:24.231151    3381 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-717000" [e716b048-4a1f-4e3b-87e3-f7e5b9ab53da] Running
	I0821 04:19:24.231158    3381 system_pods.go:61] "kube-proxy-tbz4v" [427b4ab1-4357-403d-b7f7-4ab06190bf7f] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0821 04:19:24.231165    3381 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-717000" [95491812-40aa-4c5a-90fe-6e2d65d42c3c] Running
	I0821 04:19:24.231170    3381 system_pods.go:61] "storage-provisioner" [e21091b8-7aa3-47d2-95be-638375a32aad] Pending
	I0821 04:19:24.231176    3381 system_pods.go:74] duration metric: took 145.543208ms to wait for pod list to return data ...
	I0821 04:19:24.231183    3381 default_sa.go:34] waiting for default service account to be created ...
	I0821 04:19:24.425287    3381 request.go:629] Waited for 194.044ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/default/serviceaccounts
	I0821 04:19:24.426854    3381 default_sa.go:45] found service account: "default"
	I0821 04:19:24.426862    3381 default_sa.go:55] duration metric: took 195.676ms for default service account to be created ...
	I0821 04:19:24.426866    3381 system_pods.go:116] waiting for k8s-apps to be running ...
	I0821 04:19:24.625497    3381 request.go:629] Waited for 198.599667ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/kube-system/pods
	I0821 04:19:24.628839    3381 system_pods.go:86] 7 kube-system pods found
	I0821 04:19:24.628852    3381 system_pods.go:89] "coredns-66bff467f8-h4xvc" [50b12795-6486-490b-b013-62afccd3af2a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0821 04:19:24.628856    3381 system_pods.go:89] "etcd-ingress-addon-legacy-717000" [785c3149-b079-4d22-b56a-584eade9ff2d] Running
	I0821 04:19:24.628859    3381 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-717000" [ca2cc0a6-dce1-4b9a-aacf-2cf562e3879a] Running
	I0821 04:19:24.628863    3381 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-717000" [e716b048-4a1f-4e3b-87e3-f7e5b9ab53da] Running
	I0821 04:19:24.628867    3381 system_pods.go:89] "kube-proxy-tbz4v" [427b4ab1-4357-403d-b7f7-4ab06190bf7f] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0821 04:19:24.628870    3381 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-717000" [95491812-40aa-4c5a-90fe-6e2d65d42c3c] Running
	I0821 04:19:24.628873    3381 system_pods.go:89] "storage-provisioner" [e21091b8-7aa3-47d2-95be-638375a32aad] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0821 04:19:24.628888    3381 retry.go:31] will retry after 199.484813ms: missing components: kube-proxy
	I0821 04:19:24.837585    3381 system_pods.go:86] 7 kube-system pods found
	I0821 04:19:24.837616    3381 system_pods.go:89] "coredns-66bff467f8-h4xvc" [50b12795-6486-490b-b013-62afccd3af2a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0821 04:19:24.837627    3381 system_pods.go:89] "etcd-ingress-addon-legacy-717000" [785c3149-b079-4d22-b56a-584eade9ff2d] Running
	I0821 04:19:24.837635    3381 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-717000" [ca2cc0a6-dce1-4b9a-aacf-2cf562e3879a] Running
	I0821 04:19:24.837642    3381 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-717000" [e716b048-4a1f-4e3b-87e3-f7e5b9ab53da] Running
	I0821 04:19:24.837648    3381 system_pods.go:89] "kube-proxy-tbz4v" [427b4ab1-4357-403d-b7f7-4ab06190bf7f] Running
	I0821 04:19:24.837658    3381 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-717000" [95491812-40aa-4c5a-90fe-6e2d65d42c3c] Running
	I0821 04:19:24.837665    3381 system_pods.go:89] "storage-provisioner" [e21091b8-7aa3-47d2-95be-638375a32aad] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0821 04:19:24.837674    3381 system_pods.go:126] duration metric: took 410.805208ms to wait for k8s-apps to be running ...
	I0821 04:19:24.837683    3381 system_svc.go:44] waiting for kubelet service to be running ....
	I0821 04:19:24.837834    3381 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0821 04:19:24.849246    3381 system_svc.go:56] duration metric: took 11.558708ms WaitForService to wait for kubelet.
	I0821 04:19:24.849260    3381 kubeadm.go:581] duration metric: took 2.864278291s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0821 04:19:24.849280    3381 node_conditions.go:102] verifying NodePressure condition ...
	I0821 04:19:25.025524    3381 request.go:629] Waited for 176.184167ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/nodes
	I0821 04:19:25.029288    3381 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0821 04:19:25.029318    3381 node_conditions.go:123] node cpu capacity is 2
	I0821 04:19:25.029330    3381 node_conditions.go:105] duration metric: took 180.046583ms to run NodePressure ...
	I0821 04:19:25.029348    3381 start.go:228] waiting for startup goroutines ...
	I0821 04:19:25.029363    3381 start.go:233] waiting for cluster config update ...
	I0821 04:19:25.029375    3381 start.go:242] writing updated cluster config ...
	I0821 04:19:25.030003    3381 ssh_runner.go:195] Run: rm -f paused
	I0821 04:19:25.079856    3381 start.go:600] kubectl: 1.27.2, cluster: 1.18.20 (minor skew: 9)
	I0821 04:19:25.083825    3381 out.go:177] 
	W0821 04:19:25.088078    3381 out.go:239] ! /usr/local/bin/kubectl is version 1.27.2, which may have incompatibilities with Kubernetes 1.18.20.
	I0821 04:19:25.091947    3381 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0821 04:19:25.102809    3381 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-717000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Mon 2023-08-21 11:18:37 UTC, ends at Mon 2023-08-21 11:20:31 UTC. --
	Aug 21 11:20:07 ingress-addon-legacy-717000 dockerd[1107]: time="2023-08-21T11:20:07.309851351Z" level=info msg="shim disconnected" id=30ad566f2956f5d0aa078c6f9f0b3b8ef002b2cc71eb7969e31f9344be5902d3 namespace=moby
	Aug 21 11:20:07 ingress-addon-legacy-717000 dockerd[1107]: time="2023-08-21T11:20:07.309878850Z" level=warning msg="cleaning up after shim disconnected" id=30ad566f2956f5d0aa078c6f9f0b3b8ef002b2cc71eb7969e31f9344be5902d3 namespace=moby
	Aug 21 11:20:07 ingress-addon-legacy-717000 dockerd[1107]: time="2023-08-21T11:20:07.309883142Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 21 11:20:21 ingress-addon-legacy-717000 dockerd[1101]: time="2023-08-21T11:20:21.644399037Z" level=info msg="ignoring event" container=97484c1d61859fb48ea2edd82b9c1e231cb9866353751f1c0dfe93836a894df5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 21 11:20:21 ingress-addon-legacy-717000 dockerd[1107]: time="2023-08-21T11:20:21.645175731Z" level=info msg="shim disconnected" id=97484c1d61859fb48ea2edd82b9c1e231cb9866353751f1c0dfe93836a894df5 namespace=moby
	Aug 21 11:20:21 ingress-addon-legacy-717000 dockerd[1107]: time="2023-08-21T11:20:21.645224980Z" level=warning msg="cleaning up after shim disconnected" id=97484c1d61859fb48ea2edd82b9c1e231cb9866353751f1c0dfe93836a894df5 namespace=moby
	Aug 21 11:20:21 ingress-addon-legacy-717000 dockerd[1107]: time="2023-08-21T11:20:21.645234355Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 21 11:20:22 ingress-addon-legacy-717000 dockerd[1107]: time="2023-08-21T11:20:22.638183859Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 21 11:20:22 ingress-addon-legacy-717000 dockerd[1107]: time="2023-08-21T11:20:22.638235858Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 21 11:20:22 ingress-addon-legacy-717000 dockerd[1107]: time="2023-08-21T11:20:22.638249983Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 21 11:20:22 ingress-addon-legacy-717000 dockerd[1107]: time="2023-08-21T11:20:22.638259399Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 21 11:20:22 ingress-addon-legacy-717000 dockerd[1101]: time="2023-08-21T11:20:22.690801733Z" level=info msg="ignoring event" container=37ae92a134c091854caf9967ba43435598a27c82f11e99198826a3ec55b37b4a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 21 11:20:22 ingress-addon-legacy-717000 dockerd[1107]: time="2023-08-21T11:20:22.691098811Z" level=info msg="shim disconnected" id=37ae92a134c091854caf9967ba43435598a27c82f11e99198826a3ec55b37b4a namespace=moby
	Aug 21 11:20:22 ingress-addon-legacy-717000 dockerd[1107]: time="2023-08-21T11:20:22.691422014Z" level=warning msg="cleaning up after shim disconnected" id=37ae92a134c091854caf9967ba43435598a27c82f11e99198826a3ec55b37b4a namespace=moby
	Aug 21 11:20:22 ingress-addon-legacy-717000 dockerd[1107]: time="2023-08-21T11:20:22.691432680Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 21 11:20:26 ingress-addon-legacy-717000 dockerd[1101]: time="2023-08-21T11:20:26.101737669Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=844547a0ad10c736781819dba597f82ad5537e508575d919060cef5b6f238ef6
	Aug 21 11:20:26 ingress-addon-legacy-717000 dockerd[1101]: time="2023-08-21T11:20:26.107050207Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=844547a0ad10c736781819dba597f82ad5537e508575d919060cef5b6f238ef6
	Aug 21 11:20:26 ingress-addon-legacy-717000 dockerd[1101]: time="2023-08-21T11:20:26.179931136Z" level=info msg="ignoring event" container=844547a0ad10c736781819dba597f82ad5537e508575d919060cef5b6f238ef6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 21 11:20:26 ingress-addon-legacy-717000 dockerd[1107]: time="2023-08-21T11:20:26.180299588Z" level=info msg="shim disconnected" id=844547a0ad10c736781819dba597f82ad5537e508575d919060cef5b6f238ef6 namespace=moby
	Aug 21 11:20:26 ingress-addon-legacy-717000 dockerd[1107]: time="2023-08-21T11:20:26.180695790Z" level=warning msg="cleaning up after shim disconnected" id=844547a0ad10c736781819dba597f82ad5537e508575d919060cef5b6f238ef6 namespace=moby
	Aug 21 11:20:26 ingress-addon-legacy-717000 dockerd[1107]: time="2023-08-21T11:20:26.180744081Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 21 11:20:26 ingress-addon-legacy-717000 dockerd[1101]: time="2023-08-21T11:20:26.223419964Z" level=info msg="ignoring event" container=6a80a9c44c8bfc2f768776e0d8e76dadad0bc49a8fe75beed0fb8e5e48903237 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 21 11:20:26 ingress-addon-legacy-717000 dockerd[1107]: time="2023-08-21T11:20:26.224041121Z" level=info msg="shim disconnected" id=6a80a9c44c8bfc2f768776e0d8e76dadad0bc49a8fe75beed0fb8e5e48903237 namespace=moby
	Aug 21 11:20:26 ingress-addon-legacy-717000 dockerd[1107]: time="2023-08-21T11:20:26.224103162Z" level=warning msg="cleaning up after shim disconnected" id=6a80a9c44c8bfc2f768776e0d8e76dadad0bc49a8fe75beed0fb8e5e48903237 namespace=moby
	Aug 21 11:20:26 ingress-addon-legacy-717000 dockerd[1107]: time="2023-08-21T11:20:26.224109037Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID
	37ae92a134c09       13753a81eccfd                                                                                                      9 seconds ago        Exited              hello-world-app           2                   3b06750dfadc1
	b7a9a5c342edb       nginx@sha256:cac882be2b7305e0c8d3e3cd0575a2fd58f5fde6dd5d6299605aa0f3e67ca385                                      34 seconds ago       Running             nginx                     0                   23557332d72b7
	844547a0ad10c       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   54 seconds ago       Exited              controller                0                   6a80a9c44c8bf
	370349d6e099d       a883f7fc35610                                                                                                      About a minute ago   Exited              patch                     1                   0778981b030c1
	a7703b8227d60       jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7               About a minute ago   Exited              create                    0                   96631145fa4ec
	900fda56bba02       gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944    About a minute ago   Running             storage-provisioner       0                   e873cac75bc61
	3d1d800f4ecef       565297bc6f7d4                                                                                                      About a minute ago   Running             kube-proxy                0                   bac82ebd4a818
	275c8ef74251c       6e17ba78cf3eb                                                                                                      About a minute ago   Running             coredns                   0                   fd852cc0666f2
	2418cdac816a9       ab707b0a0ea33                                                                                                      About a minute ago   Running             etcd                      0                   a4d24feae4b8c
	7e27fa5ce993c       68a4fac29a865                                                                                                      About a minute ago   Running             kube-controller-manager   0                   02d5bec194ca4
	ae3cab0c75657       095f37015706d                                                                                                      About a minute ago   Running             kube-scheduler            0                   b12038e6e0051
	b0e705dfecd3a       2694cf044d665                                                                                                      About a minute ago   Running             kube-apiserver            0                   1591b8263a2a9
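	
	The two Exited rows above are the failure signature for this test: hello-world-app is crash-looping (ATTEMPT 2), and the ingress-nginx controller exited because its namespace was being deleted during teardown. If the profile were still running, a quick way to map container IDs back to pods would be (sketch, standard kubectl against this context):
	
	    kubectl --context ingress-addon-legacy-717000 get pods -A -o wide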
	
	* 
	* ==> coredns [275c8ef74251] <==
	* [INFO] 172.17.0.1:9569 - 26204 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000008458s
	[INFO] 172.17.0.1:9569 - 22750 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000026249s
	[INFO] 172.17.0.1:63081 - 61465 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000008833s
	[INFO] 172.17.0.1:9569 - 30454 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000026124s
	[INFO] 172.17.0.1:63081 - 22493 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000008208s
	[INFO] 172.17.0.1:9569 - 46913 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000036749s
	[INFO] 172.17.0.1:63081 - 18077 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000030166s
	[INFO] 172.17.0.1:63081 - 6983 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000032832s
	[INFO] 172.17.0.1:63081 - 38192 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000037665s
	[INFO] 172.17.0.1:9569 - 19102 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000008625s
	[INFO] 172.17.0.1:9569 - 31398 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000022541s
	[INFO] 172.17.0.1:33176 - 33225 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000017291s
	[INFO] 172.17.0.1:33176 - 25010 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000009833s
	[INFO] 172.17.0.1:33176 - 8168 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000016833s
	[INFO] 172.17.0.1:33176 - 50542 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000014958s
	[INFO] 172.17.0.1:33176 - 50071 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000011958s
	[INFO] 172.17.0.1:33176 - 48700 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000011624s
	[INFO] 172.17.0.1:33176 - 57860 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000029665s
	[INFO] 172.17.0.1:57450 - 17376 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000017291s
	[INFO] 172.17.0.1:57450 - 52105 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000017166s
	[INFO] 172.17.0.1:57450 - 32952 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000012874s
	[INFO] 172.17.0.1:57450 - 60601 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000012958s
	[INFO] 172.17.0.1:57450 - 5665 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000013874s
	[INFO] 172.17.0.1:57450 - 25437 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000012458s
	[INFO] 172.17.0.1:57450 - 46779 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000019958s
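	
	The NXDOMAIN/NOERROR pattern above is ordinary search-path expansion, not a DNS failure: with the ndots:5 resolv.conf that kubelet writes into pods, each lookup walks the cluster search domains before the absolute name "hello-world-app.default.svc.cluster.local." finally answers NOERROR. A pod's resolver config can be inspected with (hypothetical pod placeholder, assuming cat exists in the image):
	
	    kubectl --context ingress-addon-legacy-717000 exec <pod> -- cat /etc/resolv.conf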
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-717000
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-717000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43
	                    minikube.k8s.io/name=ingress-addon-legacy-717000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_21T04_19_07_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 21 Aug 2023 11:19:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-717000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 21 Aug 2023 11:20:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 21 Aug 2023 11:20:13 +0000   Mon, 21 Aug 2023 11:19:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 21 Aug 2023 11:20:13 +0000   Mon, 21 Aug 2023 11:19:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 21 Aug 2023 11:20:13 +0000   Mon, 21 Aug 2023 11:19:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 21 Aug 2023 11:20:13 +0000   Mon, 21 Aug 2023 11:19:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.6
	  Hostname:    ingress-addon-legacy-717000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             4003124Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             4003124Ki
	  pods:               110
	System Info:
	  Machine ID:                 7bb66040c8e94995a025f0c13957f147
	  System UUID:                7bb66040c8e94995a025f0c13957f147
	  Boot ID:                    66a6c4f9-cfd2-4253-b33b-886e3e439253
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.4
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-7bkqz                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 coredns-66bff467f8-h4xvc                               100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     69s
	  kube-system                 etcd-ingress-addon-legacy-717000                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-apiserver-ingress-addon-legacy-717000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-717000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-proxy-tbz4v                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 kube-scheduler-ingress-addon-legacy-717000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         69s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             70Mi (1%)   170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From        Message
	  ----    ------                   ----  ----        -------
	  Normal  Starting                 78s   kubelet     Starting kubelet.
	  Normal  NodeAllocatableEnforced  78s   kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  78s   kubelet     Node ingress-addon-legacy-717000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    78s   kubelet     Node ingress-addon-legacy-717000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     78s   kubelet     Node ingress-addon-legacy-717000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                78s   kubelet     Node ingress-addon-legacy-717000 status is now: NodeReady
	  Normal  Starting                 67s   kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Aug21 11:18] ACPI: SRAT not present
	[  +0.000000] KASLR disabled due to lack of seed
	[  +0.663451] EINJ: EINJ table not found.
	[  +0.514479] systemd-fstab-generator[117]: Ignoring "noauto" for root device
	[  +0.043141] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000915] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +6.172092] systemd-fstab-generator[493]: Ignoring "noauto" for root device
	[  +0.086806] systemd-fstab-generator[504]: Ignoring "noauto" for root device
	[  +0.434614] systemd-fstab-generator[719]: Ignoring "noauto" for root device
	[  +0.184317] systemd-fstab-generator[847]: Ignoring "noauto" for root device
	[  +0.076154] systemd-fstab-generator[858]: Ignoring "noauto" for root device
	[  +0.082685] systemd-fstab-generator[871]: Ignoring "noauto" for root device
	[  +4.298396] systemd-fstab-generator[1076]: Ignoring "noauto" for root device
	[  +1.457455] kauditd_printk_skb: 53 callbacks suppressed
	[  +3.173895] systemd-fstab-generator[1544]: Ignoring "noauto" for root device
	[Aug21 11:19] kauditd_printk_skb: 29 callbacks suppressed
	[  +0.078796] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[  +5.896649] systemd-fstab-generator[2653]: Ignoring "noauto" for root device
	[ +15.954131] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.886234] kauditd_printk_skb: 13 callbacks suppressed
	[  +1.141324] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[Aug21 11:20] kauditd_printk_skb: 3 callbacks suppressed
	
	* 
	* ==> etcd [2418cdac816a] <==
	* raft2023/08/21 11:19:01 INFO: ed054832bd1917e1 became follower at term 0
	raft2023/08/21 11:19:01 INFO: newRaft ed054832bd1917e1 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/08/21 11:19:01 INFO: ed054832bd1917e1 became follower at term 1
	raft2023/08/21 11:19:01 INFO: ed054832bd1917e1 switched to configuration voters=(17079136544630577121)
	2023-08-21 11:19:02.150485 W | auth: simple token is not cryptographically signed
	2023-08-21 11:19:02.200073 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-08-21 11:19:02.264351 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-08-21 11:19:02.264453 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-08-21 11:19:02.264567 I | embed: listening for peers on 192.168.105.6:2380
	2023-08-21 11:19:02.264941 I | etcdserver: ed054832bd1917e1 as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2023/08/21 11:19:02 INFO: ed054832bd1917e1 switched to configuration voters=(17079136544630577121)
	2023-08-21 11:19:02.265196 I | etcdserver/membership: added member ed054832bd1917e1 [https://192.168.105.6:2380] to cluster 45a39c2c59b0edf4
	raft2023/08/21 11:19:02 INFO: ed054832bd1917e1 is starting a new election at term 1
	raft2023/08/21 11:19:02 INFO: ed054832bd1917e1 became candidate at term 2
	raft2023/08/21 11:19:02 INFO: ed054832bd1917e1 received MsgVoteResp from ed054832bd1917e1 at term 2
	raft2023/08/21 11:19:02 INFO: ed054832bd1917e1 became leader at term 2
	raft2023/08/21 11:19:02 INFO: raft.node: ed054832bd1917e1 elected leader ed054832bd1917e1 at term 2
	2023-08-21 11:19:02.834768 I | etcdserver: published {Name:ingress-addon-legacy-717000 ClientURLs:[https://192.168.105.6:2379]} to cluster 45a39c2c59b0edf4
	2023-08-21 11:19:02.834834 I | embed: ready to serve client requests
	2023-08-21 11:19:02.835534 I | embed: serving client requests on 192.168.105.6:2379
	2023-08-21 11:19:02.835615 I | etcdserver: setting up the initial cluster version to 3.4
	2023-08-21 11:19:02.835797 I | embed: ready to serve client requests
	2023-08-21 11:19:02.836290 I | embed: serving client requests on 127.0.0.1:2379
	2023-08-21 11:19:02.843293 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-08-21 11:19:02.843380 I | etcdserver/api: enabled capabilities for version 3.4
	
	* 
	* ==> kernel <==
	*  11:20:31 up 1 min,  0 users,  load average: 0.40, 0.17, 0.06
	Linux ingress-addon-legacy-717000 5.10.57 #1 SMP PREEMPT Fri Jul 14 22:49:12 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [b0e705dfecd3] <==
	* I0821 11:19:04.276864       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
	E0821 11:19:04.283446       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.105.6, ResourceVersion: 0, AdditionalErrorMsg: 
	I0821 11:19:04.363713       1 cache.go:39] Caches are synced for autoregister controller
	I0821 11:19:04.364958       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0821 11:19:04.365009       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0821 11:19:04.365044       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0821 11:19:04.365714       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0821 11:19:05.263892       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0821 11:19:05.264229       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0821 11:19:05.275984       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0821 11:19:05.282851       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0821 11:19:05.282892       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0821 11:19:05.419197       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0821 11:19:05.429952       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0821 11:19:05.533240       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.105.6]
	I0821 11:19:05.533727       1 controller.go:609] quota admission added evaluator for: endpoints
	I0821 11:19:05.535693       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0821 11:19:06.569272       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0821 11:19:07.145874       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0821 11:19:07.331153       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0821 11:19:13.538323       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0821 11:19:22.521052       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0821 11:19:22.576924       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0821 11:19:25.439528       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0821 11:19:54.212348       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	* 
	* ==> kube-controller-manager [7e27fa5ce993] <==
	* W0821 11:19:22.523493       1 node_lifecycle_controller.go:1048] Missing timestamp for Node ingress-addon-legacy-717000. Assuming now as a timestamp.
	I0821 11:19:22.523561       1 node_lifecycle_controller.go:1249] Controller detected that zone  is now in state Normal.
	I0821 11:19:22.523651       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ingress-addon-legacy-717000", UID:"e1824c21-d0a2-4541-94dc-7730fe2830b1", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node ingress-addon-legacy-717000 event: Registered Node ingress-addon-legacy-717000 in Controller
	I0821 11:19:22.523726       1 taint_manager.go:187] Starting NoExecuteTaintManager
	I0821 11:19:22.526203       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"a4227c76-f2a6-4e05-b382-850f5c18ad56", APIVersion:"apps/v1", ResourceVersion:"214", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-tbz4v
	I0821 11:19:22.551404       1 shared_informer.go:230] Caches are synced for ReplicaSet 
	I0821 11:19:22.559735       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0821 11:19:22.571000       1 shared_informer.go:230] Caches are synced for resource quota 
	I0821 11:19:22.575520       1 shared_informer.go:230] Caches are synced for deployment 
	I0821 11:19:22.579874       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"8c98ca2c-ad2b-459d-832c-8f389e799404", APIVersion:"apps/v1", ResourceVersion:"310", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-66bff467f8 to 1
	I0821 11:19:22.583343       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"0d13f708-c22b-4aa0-b3a7-29566c0a7906", APIVersion:"apps/v1", ResourceVersion:"339", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-h4xvc
	I0821 11:19:22.588238       1 shared_informer.go:230] Caches are synced for disruption 
	I0821 11:19:22.588258       1 disruption.go:339] Sending events to api server.
	I0821 11:19:22.619831       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0821 11:19:22.619844       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0821 11:19:22.629225       1 shared_informer.go:230] Caches are synced for resource quota 
	I0821 11:19:25.436357       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"dd23ad86-af60-49ee-82c5-3863a27552e0", APIVersion:"apps/v1", ResourceVersion:"403", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0821 11:19:25.443983       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"f6f54e21-7ff2-43e6-923b-29874f635d55", APIVersion:"apps/v1", ResourceVersion:"404", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-47g74
	I0821 11:19:25.444467       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"46ca1497-9a36-4568-b33d-d1de95a8b7d8", APIVersion:"batch/v1", ResourceVersion:"405", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-nkpgl
	I0821 11:19:25.478361       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"af6c6e08-abc6-4bcb-b721-4ad6e1431566", APIVersion:"batch/v1", ResourceVersion:"418", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-xk6hz
	I0821 11:19:28.712672       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"46ca1497-9a36-4568-b33d-d1de95a8b7d8", APIVersion:"batch/v1", ResourceVersion:"419", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0821 11:19:29.724871       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"af6c6e08-abc6-4bcb-b721-4ad6e1431566", APIVersion:"batch/v1", ResourceVersion:"427", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0821 11:20:04.499847       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"5e0a0c49-54c3-48cb-870c-c37dfd2ed9f7", APIVersion:"apps/v1", ResourceVersion:"563", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0821 11:20:04.515373       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"1d5d7e8a-b1fa-4579-a08b-d12724824bed", APIVersion:"apps/v1", ResourceVersion:"564", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-7bkqz
	E0821 11:20:28.844261       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-dvdcz" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
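	
	The final tokens_controller error is expected during teardown rather than a bug: the ingress-nginx namespace is terminating, so the API server refuses to create a replacement default-token secret inside it. The namespace phase could be confirmed while the cluster is up with:
	
	    kubectl --context ingress-addon-legacy-717000 get ns ingress-nginx -o jsonpath='{.status.phase}'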
	
	* 
	* ==> kube-proxy [3d1d800f4ece] <==
	* W0821 11:19:24.561628       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0821 11:19:24.570984       1 node.go:136] Successfully retrieved node IP: 192.168.105.6
	I0821 11:19:24.571012       1 server_others.go:186] Using iptables Proxier.
	I0821 11:19:24.571159       1 server.go:583] Version: v1.18.20
	I0821 11:19:24.572492       1 config.go:133] Starting endpoints config controller
	I0821 11:19:24.572506       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0821 11:19:24.572685       1 config.go:315] Starting service config controller
	I0821 11:19:24.572688       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0821 11:19:24.677614       1 shared_informer.go:230] Caches are synced for endpoints config 
	I0821 11:19:24.677686       1 shared_informer.go:230] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [ae3cab0c7565] <==
	* I0821 11:19:04.303040       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0821 11:19:04.303124       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0821 11:19:04.304318       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0821 11:19:04.304394       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0821 11:19:04.304434       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0821 11:19:04.304474       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0821 11:19:04.313697       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0821 11:19:04.313787       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0821 11:19:04.313848       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0821 11:19:04.313885       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0821 11:19:04.313941       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0821 11:19:04.313977       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0821 11:19:04.314026       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0821 11:19:04.314062       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0821 11:19:04.319690       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0821 11:19:04.319760       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0821 11:19:04.319815       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0821 11:19:04.319853       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0821 11:19:05.137523       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0821 11:19:05.178371       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0821 11:19:05.184291       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0821 11:19:05.216513       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0821 11:19:05.259018       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0821 11:19:05.604590       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0821 11:19:22.371706       1 factory.go:503] pod: kube-system/storage-provisioner is already present in the active queue
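	
	The burst of "forbidden" list errors at 11:19:04-05 is the usual control-plane bootstrap race: kube-scheduler starts before its RBAC bindings propagate, and the errors stop once caches sync at 11:19:05.604590. If needed, scheduler permissions can be spot-checked afterwards with:
	
	    kubectl --context ingress-addon-legacy-717000 auth can-i list pods --as=system:kube-scheduler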
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-08-21 11:18:37 UTC, ends at Mon 2023-08-21 11:20:31 UTC. --
	Aug 21 11:20:09 ingress-addon-legacy-717000 kubelet[2659]: I0821 11:20:09.247459    2659 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 30ad566f2956f5d0aa078c6f9f0b3b8ef002b2cc71eb7969e31f9344be5902d3
	Aug 21 11:20:09 ingress-addon-legacy-717000 kubelet[2659]: E0821 11:20:09.247866    2659 pod_workers.go:191] Error syncing pod 3a49a1b6-b2db-4a7d-bbd2-f7a5eb7756c0 ("hello-world-app-5f5d8b66bb-7bkqz_default(3a49a1b6-b2db-4a7d-bbd2-f7a5eb7756c0)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-7bkqz_default(3a49a1b6-b2db-4a7d-bbd2-f7a5eb7756c0)"
	Aug 21 11:20:11 ingress-addon-legacy-717000 kubelet[2659]: I0821 11:20:11.605303    2659 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: e702971b8b4b06912bb1f22b62626200f5169a02202ef454a4809e24d4f144ad
	Aug 21 11:20:11 ingress-addon-legacy-717000 kubelet[2659]: E0821 11:20:11.606615    2659 pod_workers.go:191] Error syncing pod a81d8290-a54b-4c32-915e-b1281e35f808 ("kube-ingress-dns-minikube_kube-system(a81d8290-a54b-4c32-915e-b1281e35f808)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with CrashLoopBackOff: "back-off 20s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(a81d8290-a54b-4c32-915e-b1281e35f808)"
	Aug 21 11:20:19 ingress-addon-legacy-717000 kubelet[2659]: I0821 11:20:19.866039    2659 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-ljwjq" (UniqueName: "kubernetes.io/secret/a81d8290-a54b-4c32-915e-b1281e35f808-minikube-ingress-dns-token-ljwjq") pod "a81d8290-a54b-4c32-915e-b1281e35f808" (UID: "a81d8290-a54b-4c32-915e-b1281e35f808")
	Aug 21 11:20:19 ingress-addon-legacy-717000 kubelet[2659]: I0821 11:20:19.867671    2659 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a81d8290-a54b-4c32-915e-b1281e35f808-minikube-ingress-dns-token-ljwjq" (OuterVolumeSpecName: "minikube-ingress-dns-token-ljwjq") pod "a81d8290-a54b-4c32-915e-b1281e35f808" (UID: "a81d8290-a54b-4c32-915e-b1281e35f808"). InnerVolumeSpecName "minikube-ingress-dns-token-ljwjq". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 21 11:20:19 ingress-addon-legacy-717000 kubelet[2659]: I0821 11:20:19.966230    2659 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-ljwjq" (UniqueName: "kubernetes.io/secret/a81d8290-a54b-4c32-915e-b1281e35f808-minikube-ingress-dns-token-ljwjq") on node "ingress-addon-legacy-717000" DevicePath ""
	Aug 21 11:20:22 ingress-addon-legacy-717000 kubelet[2659]: I0821 11:20:22.472544    2659 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: e702971b8b4b06912bb1f22b62626200f5169a02202ef454a4809e24d4f144ad
	Aug 21 11:20:22 ingress-addon-legacy-717000 kubelet[2659]: I0821 11:20:22.603208    2659 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 30ad566f2956f5d0aa078c6f9f0b3b8ef002b2cc71eb7969e31f9344be5902d3
	Aug 21 11:20:22 ingress-addon-legacy-717000 kubelet[2659]: W0821 11:20:22.704369    2659 container.go:412] Failed to create summary reader for "/kubepods/besteffort/pod3a49a1b6-b2db-4a7d-bbd2-f7a5eb7756c0/37ae92a134c091854caf9967ba43435598a27c82f11e99198826a3ec55b37b4a": none of the resources are being tracked.
	Aug 21 11:20:23 ingress-addon-legacy-717000 kubelet[2659]: W0821 11:20:23.500330    2659 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-7bkqz through plugin: invalid network status for
	Aug 21 11:20:23 ingress-addon-legacy-717000 kubelet[2659]: I0821 11:20:23.507473    2659 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 30ad566f2956f5d0aa078c6f9f0b3b8ef002b2cc71eb7969e31f9344be5902d3
	Aug 21 11:20:23 ingress-addon-legacy-717000 kubelet[2659]: I0821 11:20:23.508024    2659 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 37ae92a134c091854caf9967ba43435598a27c82f11e99198826a3ec55b37b4a
	Aug 21 11:20:23 ingress-addon-legacy-717000 kubelet[2659]: E0821 11:20:23.508617    2659 pod_workers.go:191] Error syncing pod 3a49a1b6-b2db-4a7d-bbd2-f7a5eb7756c0 ("hello-world-app-5f5d8b66bb-7bkqz_default(3a49a1b6-b2db-4a7d-bbd2-f7a5eb7756c0)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-7bkqz_default(3a49a1b6-b2db-4a7d-bbd2-f7a5eb7756c0)"
	Aug 21 11:20:24 ingress-addon-legacy-717000 kubelet[2659]: E0821 11:20:24.096341    2659 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-47g74.177d62089d0c7dbc", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-47g74", UID:"cd5103ae-a7a5-498c-8dd2-b3977fec93d4", APIVersion:"v1", ResourceVersion:"415", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-717000"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc130effe05adcdbc, ext:76965616255, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc130effe05adcdbc, ext:76965616255, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-47g74.177d62089d0c7dbc" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Aug 21 11:20:24 ingress-addon-legacy-717000 kubelet[2659]: E0821 11:20:24.108217    2659 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-47g74.177d62089d0c7dbc", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-47g74", UID:"cd5103ae-a7a5-498c-8dd2-b3977fec93d4", APIVersion:"v1", ResourceVersion:"415", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-717000"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc130effe05adcdbc, ext:76965616255, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc130effe05fea940, ext:76970915331, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-47g74.177d62089d0c7dbc" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Aug 21 11:20:24 ingress-addon-legacy-717000 kubelet[2659]: W0821 11:20:24.539756    2659 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-7bkqz through plugin: invalid network status for
	Aug 21 11:20:26 ingress-addon-legacy-717000 kubelet[2659]: I0821 11:20:26.224166    2659 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/cd5103ae-a7a5-498c-8dd2-b3977fec93d4-webhook-cert") pod "cd5103ae-a7a5-498c-8dd2-b3977fec93d4" (UID: "cd5103ae-a7a5-498c-8dd2-b3977fec93d4")
	Aug 21 11:20:26 ingress-addon-legacy-717000 kubelet[2659]: I0821 11:20:26.224189    2659 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-5r7kw" (UniqueName: "kubernetes.io/secret/cd5103ae-a7a5-498c-8dd2-b3977fec93d4-ingress-nginx-token-5r7kw") pod "cd5103ae-a7a5-498c-8dd2-b3977fec93d4" (UID: "cd5103ae-a7a5-498c-8dd2-b3977fec93d4")
	Aug 21 11:20:26 ingress-addon-legacy-717000 kubelet[2659]: I0821 11:20:26.230094    2659 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd5103ae-a7a5-498c-8dd2-b3977fec93d4-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "cd5103ae-a7a5-498c-8dd2-b3977fec93d4" (UID: "cd5103ae-a7a5-498c-8dd2-b3977fec93d4"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 21 11:20:26 ingress-addon-legacy-717000 kubelet[2659]: I0821 11:20:26.230242    2659 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd5103ae-a7a5-498c-8dd2-b3977fec93d4-ingress-nginx-token-5r7kw" (OuterVolumeSpecName: "ingress-nginx-token-5r7kw") pod "cd5103ae-a7a5-498c-8dd2-b3977fec93d4" (UID: "cd5103ae-a7a5-498c-8dd2-b3977fec93d4"). InnerVolumeSpecName "ingress-nginx-token-5r7kw". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 21 11:20:26 ingress-addon-legacy-717000 kubelet[2659]: I0821 11:20:26.324337    2659 reconciler.go:319] Volume detached for volume "ingress-nginx-token-5r7kw" (UniqueName: "kubernetes.io/secret/cd5103ae-a7a5-498c-8dd2-b3977fec93d4-ingress-nginx-token-5r7kw") on node "ingress-addon-legacy-717000" DevicePath ""
	Aug 21 11:20:26 ingress-addon-legacy-717000 kubelet[2659]: I0821 11:20:26.324359    2659 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/cd5103ae-a7a5-498c-8dd2-b3977fec93d4-webhook-cert") on node "ingress-addon-legacy-717000" DevicePath ""
	Aug 21 11:20:26 ingress-addon-legacy-717000 kubelet[2659]: W0821 11:20:26.585458    2659 pod_container_deletor.go:77] Container "6a80a9c44c8bfc2f768776e0d8e76dadad0bc49a8fe75beed0fb8e5e48903237" not found in pod's containers
	Aug 21 11:20:27 ingress-addon-legacy-717000 kubelet[2659]: W0821 11:20:27.631248    2659 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/cd5103ae-a7a5-498c-8dd2-b3977fec93d4/volumes" does not exist
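	
	The kubelet log shows textbook CrashLoopBackOff doubling for hello-world-app (back-off 10s at 11:20:09, then 20s at 11:20:23); the remaining warnings are cleanup of the deleted ingress-nginx pod. While the cluster is still up, the crashing container's previous output would be the next thing to pull:
	
	    kubectl --context ingress-addon-legacy-717000 logs hello-world-app-5f5d8b66bb-7bkqz --previous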
	
	* 
	* ==> storage-provisioner [900fda56bba0] <==
	* I0821 11:19:26.027470       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0821 11:19:26.041074       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0821 11:19:26.041140       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0821 11:19:26.045149       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0821 11:19:26.045565       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-717000_a4ad4258-ad09-4eae-942b-e68bdda499ce!
	I0821 11:19:26.046561       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"32e74602-fd7b-4f7d-ba84-ad2fe0e9e25c", APIVersion:"v1", ResourceVersion:"435", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-717000_a4ad4258-ad09-4eae-942b-e68bdda499ce became leader
	I0821 11:19:26.147051       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-717000_a4ad4258-ad09-4eae-942b-e68bdda499ce!

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-717000 -n ingress-addon-legacy-717000
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-717000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (52.27s)

TestMountStart/serial/StartWithMountFirst (10.34s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-574000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-574000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.273617541s)

-- stdout --
	* [mount-start-1-574000] minikube v1.31.2 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17102
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-574000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-574000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-574000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-574000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
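
Every qemu2 start failure in this run shares the same root cause as the error above: nothing is listening on /var/run/socket_vmnet, so minikube's socket_vmnet client cannot attach the VM's network. A minimal host-side check, using the socket and client paths this run's cluster config references:

    ls -l /var/run/socket_vmnet
    ls -l /opt/socket_vmnet/bin/socket_vmnet_client
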
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-574000 -n mount-start-1-574000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-574000 -n mount-start-1-574000: exit status 7 (67.203416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-574000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.34s)

TestMultiNode/serial/FreshStart2Nodes (9.81s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-806000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
E0821 04:23:08.051871    1362 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/functional-818000/client.crt: no such file or directory
multinode_test.go:85: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-806000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.740701875s)

-- stdout --
	* [multinode-806000] minikube v1.31.2 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17102
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-806000 in cluster multinode-806000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-806000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	I0821 04:23:06.506687    3741 out.go:296] Setting OutFile to fd 1 ...
	I0821 04:23:06.506808    3741 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:23:06.506811    3741 out.go:309] Setting ErrFile to fd 2...
	I0821 04:23:06.506814    3741 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:23:06.506915    3741 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17102-920/.minikube/bin
	I0821 04:23:06.507925    3741 out.go:303] Setting JSON to false
	I0821 04:23:06.523107    3741 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3160,"bootTime":1692613826,"procs":412,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0821 04:23:06.523163    3741 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0821 04:23:06.528061    3741 out.go:177] * [multinode-806000] minikube v1.31.2 on Darwin 13.5 (arm64)
	I0821 04:23:06.534994    3741 out.go:177]   - MINIKUBE_LOCATION=17102
	I0821 04:23:06.535079    3741 notify.go:220] Checking for updates...
	I0821 04:23:06.539040    3741 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	I0821 04:23:06.541965    3741 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0821 04:23:06.545025    3741 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0821 04:23:06.548055    3741 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	I0821 04:23:06.550968    3741 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0821 04:23:06.554176    3741 driver.go:373] Setting default libvirt URI to qemu:///system
	I0821 04:23:06.557984    3741 out.go:177] * Using the qemu2 driver based on user configuration
	I0821 04:23:06.564957    3741 start.go:298] selected driver: qemu2
	I0821 04:23:06.564964    3741 start.go:902] validating driver "qemu2" against <nil>
	I0821 04:23:06.564969    3741 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0821 04:23:06.567912    3741 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0821 04:23:06.571083    3741 out.go:177] * Automatically selected the socket_vmnet network
	I0821 04:23:06.574112    3741 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0821 04:23:06.574151    3741 cni.go:84] Creating CNI manager for ""
	I0821 04:23:06.574155    3741 cni.go:136] 0 nodes found, recommending kindnet
	I0821 04:23:06.574159    3741 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0821 04:23:06.574163    3741 start_flags.go:319] config:
	{Name:multinode-806000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:multinode-806000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 04:23:06.578414    3741 iso.go:125] acquiring lock: {Name:mk813ea611542195bb0511881888be3fabc72ff6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 04:23:06.586090    3741 out.go:177] * Starting control plane node multinode-806000 in cluster multinode-806000
	I0821 04:23:06.589982    3741 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime docker
	I0821 04:23:06.589997    3741 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4
	I0821 04:23:06.590007    3741 cache.go:57] Caching tarball of preloaded images
	I0821 04:23:06.590050    3741 preload.go:174] Found /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0821 04:23:06.590055    3741 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on docker
	I0821 04:23:06.590231    3741 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/multinode-806000/config.json ...
	I0821 04:23:06.590244    3741 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/multinode-806000/config.json: {Name:mk768e8f21f6ac29e3f420f8fa42cc02f7f7e11a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 04:23:06.590420    3741 start.go:365] acquiring machines lock for multinode-806000: {Name:mk9b32d9fe994be32d77812db464b2cfa7bfb400 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0821 04:23:06.590452    3741 start.go:369] acquired machines lock for "multinode-806000" in 22.292µs
	I0821 04:23:06.590462    3741 start.go:93] Provisioning new machine with config: &{Name:multinode-806000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:multinode-806000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0821 04:23:06.590497    3741 start.go:125] createHost starting for "" (driver="qemu2")
	I0821 04:23:06.594023    3741 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0821 04:23:06.608830    3741 start.go:159] libmachine.API.Create for "multinode-806000" (driver="qemu2")
	I0821 04:23:06.608876    3741 client.go:168] LocalClient.Create starting
	I0821 04:23:06.608966    3741 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem
	I0821 04:23:06.609007    3741 main.go:141] libmachine: Decoding PEM data...
	I0821 04:23:06.609021    3741 main.go:141] libmachine: Parsing certificate...
	I0821 04:23:06.609062    3741 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/cert.pem
	I0821 04:23:06.609083    3741 main.go:141] libmachine: Decoding PEM data...
	I0821 04:23:06.609092    3741 main.go:141] libmachine: Parsing certificate...
	I0821 04:23:06.609448    3741 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17102-920/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17102-920/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0821 04:23:06.728859    3741 main.go:141] libmachine: Creating SSH key...
	I0821 04:23:06.763138    3741 main.go:141] libmachine: Creating Disk image...
	I0821 04:23:06.763148    3741 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0821 04:23:06.763302    3741 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17102-920/.minikube/machines/multinode-806000/disk.qcow2.raw /Users/jenkins/minikube-integration/17102-920/.minikube/machines/multinode-806000/disk.qcow2
	I0821 04:23:06.771751    3741 main.go:141] libmachine: STDOUT: 
	I0821 04:23:06.771763    3741 main.go:141] libmachine: STDERR: 
	I0821 04:23:06.771838    3741 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/multinode-806000/disk.qcow2 +20000M
	I0821 04:23:06.778998    3741 main.go:141] libmachine: STDOUT: Image resized.
	
	I0821 04:23:06.779010    3741 main.go:141] libmachine: STDERR: 
	I0821 04:23:06.779025    3741 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17102-920/.minikube/machines/multinode-806000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17102-920/.minikube/machines/multinode-806000/disk.qcow2
	I0821 04:23:06.779031    3741 main.go:141] libmachine: Starting QEMU VM...
	I0821 04:23:06.779068    3741 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17102-920/.minikube/machines/multinode-806000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/multinode-806000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17102-920/.minikube/machines/multinode-806000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:21:8d:5d:be:36 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/multinode-806000/disk.qcow2
	I0821 04:23:06.780554    3741 main.go:141] libmachine: STDOUT: 
	I0821 04:23:06.780565    3741 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0821 04:23:06.780585    3741 client.go:171] LocalClient.Create took 171.702708ms
	I0821 04:23:08.782774    3741 start.go:128] duration metric: createHost completed in 2.192278583s
	I0821 04:23:08.782823    3741 start.go:83] releasing machines lock for "multinode-806000", held for 2.192381375s
	W0821 04:23:08.782873    3741 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:23:08.792147    3741 out.go:177] * Deleting "multinode-806000" in qemu2 ...
	W0821 04:23:08.812106    3741 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:23:08.812130    3741 start.go:687] Will try again in 5 seconds ...
	I0821 04:23:13.814310    3741 start.go:365] acquiring machines lock for multinode-806000: {Name:mk9b32d9fe994be32d77812db464b2cfa7bfb400 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0821 04:23:13.814737    3741 start.go:369] acquired machines lock for "multinode-806000" in 310.833µs
	I0821 04:23:13.814852    3741 start.go:93] Provisioning new machine with config: &{Name:multinode-806000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:multinode-806000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0821 04:23:13.815221    3741 start.go:125] createHost starting for "" (driver="qemu2")
	I0821 04:23:13.828104    3741 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0821 04:23:13.875613    3741 start.go:159] libmachine.API.Create for "multinode-806000" (driver="qemu2")
	I0821 04:23:13.875697    3741 client.go:168] LocalClient.Create starting
	I0821 04:23:13.875813    3741 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem
	I0821 04:23:13.875867    3741 main.go:141] libmachine: Decoding PEM data...
	I0821 04:23:13.875885    3741 main.go:141] libmachine: Parsing certificate...
	I0821 04:23:13.875951    3741 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/cert.pem
	I0821 04:23:13.875991    3741 main.go:141] libmachine: Decoding PEM data...
	I0821 04:23:13.876003    3741 main.go:141] libmachine: Parsing certificate...
	I0821 04:23:13.876552    3741 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17102-920/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17102-920/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0821 04:23:14.032910    3741 main.go:141] libmachine: Creating SSH key...
	I0821 04:23:14.162541    3741 main.go:141] libmachine: Creating Disk image...
	I0821 04:23:14.162549    3741 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0821 04:23:14.162704    3741 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17102-920/.minikube/machines/multinode-806000/disk.qcow2.raw /Users/jenkins/minikube-integration/17102-920/.minikube/machines/multinode-806000/disk.qcow2
	I0821 04:23:14.171307    3741 main.go:141] libmachine: STDOUT: 
	I0821 04:23:14.171321    3741 main.go:141] libmachine: STDERR: 
	I0821 04:23:14.171372    3741 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/multinode-806000/disk.qcow2 +20000M
	I0821 04:23:14.178534    3741 main.go:141] libmachine: STDOUT: Image resized.
	
	I0821 04:23:14.178546    3741 main.go:141] libmachine: STDERR: 
	I0821 04:23:14.178560    3741 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17102-920/.minikube/machines/multinode-806000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17102-920/.minikube/machines/multinode-806000/disk.qcow2
	I0821 04:23:14.178572    3741 main.go:141] libmachine: Starting QEMU VM...
	I0821 04:23:14.178615    3741 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17102-920/.minikube/machines/multinode-806000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/multinode-806000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17102-920/.minikube/machines/multinode-806000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:da:03:48:9a:56 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/multinode-806000/disk.qcow2
	I0821 04:23:14.180117    3741 main.go:141] libmachine: STDOUT: 
	I0821 04:23:14.180130    3741 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0821 04:23:14.180143    3741 client.go:171] LocalClient.Create took 304.441833ms
	I0821 04:23:16.182353    3741 start.go:128] duration metric: createHost completed in 2.36711975s
	I0821 04:23:16.182429    3741 start.go:83] releasing machines lock for "multinode-806000", held for 2.367690708s
	W0821 04:23:16.182959    3741 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-806000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-806000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:23:16.192706    3741 out.go:177] 
	W0821 04:23:16.195713    3741 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0821 04:23:16.195752    3741 out.go:239] * 
	* 
	W0821 04:23:16.198328    3741 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0821 04:23:16.207638    3741 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:87: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-806000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-806000 -n multinode-806000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-806000 -n multinode-806000: exit status 7 (67.650625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-806000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.81s)
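
Note: every VM start in this test dies on the same condition — nothing is accepting connections on /var/run/socket_vmnet when socket_vmnet_client launches qemu-system-aarch64. Below is a minimal Go sketch (not minikube code) that probes the socket before a launch is attempted; the socket path mirrors SocketVMnetPath from the failing config, and the brew-service hint is an assumption about a typical Homebrew install.

// probe_socket_vmnet.go — sketch: check whether the socket_vmnet daemon is
// reachable before trying to launch QEMU through socket_vmnet_client.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Assumption: default socket path from the config dump above.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// This is the same condition the log reports as
		// `Failed to connect to "/var/run/socket_vmnet": Connection refused`.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
		fmt.Fprintln(os.Stderr, "hint: start the daemon first (e.g. `sudo brew services start socket_vmnet` on a Homebrew install)")
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is up; a QEMU launch via socket_vmnet_client should not be refused")
}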

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (84.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-806000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:481: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-806000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (126.529625ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-806000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:483: failed to create busybox deployment to multinode cluster
multinode_test.go:486: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-806000 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-806000 -- rollout status deployment/busybox: exit status 1 (54.880917ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-806000"

                                                
                                                
** /stderr **
multinode_test.go:488: failed to deploy busybox to multinode cluster
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-806000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-806000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (55.106375ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-806000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-806000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-806000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.994416ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-806000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-806000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-806000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.269125ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-806000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-806000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-806000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.69175ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-806000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-806000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-806000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (98.842625ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-806000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-806000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-806000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (100.992042ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-806000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-806000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-806000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.180791ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-806000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-806000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-806000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (100.820417ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-806000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-806000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-806000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.210459ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-806000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
E0821 04:24:29.955921    1362 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/functional-818000/client.crt: no such file or directory
E0821 04:24:39.171173    1362 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/ingress-addon-legacy-717000/client.crt: no such file or directory
E0821 04:24:39.177508    1362 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/ingress-addon-legacy-717000/client.crt: no such file or directory
E0821 04:24:39.189574    1362 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/ingress-addon-legacy-717000/client.crt: no such file or directory
E0821 04:24:39.211665    1362 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/ingress-addon-legacy-717000/client.crt: no such file or directory
E0821 04:24:39.253711    1362 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/ingress-addon-legacy-717000/client.crt: no such file or directory
E0821 04:24:39.335773    1362 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/ingress-addon-legacy-717000/client.crt: no such file or directory
E0821 04:24:39.496003    1362 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/ingress-addon-legacy-717000/client.crt: no such file or directory
E0821 04:24:39.817957    1362 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/ingress-addon-legacy-717000/client.crt: no such file or directory
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-806000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-806000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (100.226625ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-806000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:512: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:516: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-806000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:516: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-806000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (53.839208ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-806000"

                                                
                                                
** /stderr **
multinode_test.go:518: failed get Pod names
multinode_test.go:524: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-806000 -- exec  -- nslookup kubernetes.io
E0821 04:24:40.459873    1362 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/ingress-addon-legacy-717000/client.crt: no such file or directory
multinode_test.go:524: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-806000 -- exec  -- nslookup kubernetes.io: exit status 1 (54.228917ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-806000"

                                                
                                                
** /stderr **
multinode_test.go:526: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-806000 -- exec  -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-806000 -- exec  -- nslookup kubernetes.default: exit status 1 (53.877042ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-806000"

                                                
                                                
** /stderr **
multinode_test.go:536: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:542: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-806000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-806000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (53.436458ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-806000"

                                                
                                                
** /stderr **
multinode_test.go:544: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-806000 -n multinode-806000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-806000 -n multinode-806000: exit status 7 (28.834291ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-806000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (84.36s)
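
Note: the repeated "failed to retrieve Pod IPs (may be temporary)" lines above come from a poll loop — the test re-runs the jsonpath query until pod IPs appear or it gives up. A rough Go sketch of that pattern follows; the attempt count and sleep interval are illustrative, not the test's exact values.

// retry_pod_ips.go — sketch: poll `minikube kubectl ... get pods` for pod IPs,
// treating each failure as possibly temporary, as the log above does.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podIPs shells out the same way the test does and returns the jsonpath output.
func podIPs(profile string) (string, error) {
	out, err := exec.Command("out/minikube-darwin-arm64", "kubectl", "-p", profile, "--",
		"get", "pods", "-o", "jsonpath={.items[*].status.podIP}").CombinedOutput()
	return strings.TrimSpace(string(out)), err
}

func main() {
	for attempt := 1; attempt <= 10; attempt++ { // illustrative bound
		ips, err := podIPs("multinode-806000")
		if err == nil && ips != "" {
			fmt.Println("pod IPs:", ips)
			return
		}
		// Mirrors the log's "failed to retrieve Pod IPs (may be temporary)".
		fmt.Printf("attempt %d failed (may be temporary): %v\n", attempt, err)
		time.Sleep(5 * time.Second) // illustrative interval
	}
	fmt.Println("giving up: no server found for the cluster")
}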

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-806000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-806000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (53.006292ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-806000"

                                                
                                                
** /stderr **
multinode_test.go:554: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-806000 -n multinode-806000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-806000 -n multinode-806000: exit status 7 (28.637334ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-806000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.08s)

                                                
                                    
TestMultiNode/serial/AddNode (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-806000 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-806000 -v 3 --alsologtostderr: exit status 89 (40.119208ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-806000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0821 04:24:40.736574    3819 out.go:296] Setting OutFile to fd 1 ...
	I0821 04:24:40.736750    3819 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:24:40.736753    3819 out.go:309] Setting ErrFile to fd 2...
	I0821 04:24:40.736755    3819 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:24:40.736867    3819 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17102-920/.minikube/bin
	I0821 04:24:40.737088    3819 mustload.go:65] Loading cluster: multinode-806000
	I0821 04:24:40.737257    3819 config.go:182] Loaded profile config "multinode-806000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0821 04:24:40.742315    3819 out.go:177] * The control plane node must be running for this command
	I0821 04:24:40.746305    3819 out.go:177]   To start a cluster, run: "minikube start -p multinode-806000"

                                                
                                                
** /stderr **
multinode_test.go:112: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-806000 -v 3 --alsologtostderr" : exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-806000 -n multinode-806000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-806000 -n multinode-806000: exit status 7 (29.260792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-806000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)
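
Note: the exit-89 path above is a guard — `node add` refuses to do anything unless the control-plane host is running. A small sketch of an equivalent pre-check, using the same `status --format={{.Host}}` query the post-mortem runs; binary path and profile name are taken from this log.

// require_running.go — sketch: bail out before `node add` when the
// control-plane host is not "Running", mirroring the behavior above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	profile := "multinode-806000"
	// status exits non-zero for a stopped host, but still prints the state,
	// so the error from Run/Output is deliberately ignored here.
	out, _ := exec.Command("out/minikube-darwin-arm64", "status",
		"--format={{.Host}}", "-p", profile, "-n", profile).Output()
	if state := strings.TrimSpace(string(out)); state != "Running" {
		// Mirrors: "The control plane node must be running for this command".
		fmt.Printf("control plane is %q; run: minikube start -p %s\n", state, profile)
		os.Exit(89) // same exit code the failing `node add` returned
	}
	fmt.Println("control plane running; safe to run `minikube node add`")
}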

                                                
                                    
TestMultiNode/serial/ProfileList (0.16s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:155: expected profile "multinode-806000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-806000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-806000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"VMDriver\":\"\",\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":0,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.27.4\",\"ClusterName\":\"multinode-806000\",\"Namespace\":\"default\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\",\"NodeIP\":\"\",\"NodePort\":8443,\"NodeName\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.27.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-806000 -n multinode-806000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-806000 -n multinode-806000: exit status 7 (32.787292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-806000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.16s)
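
Note: the assertion above decodes the quoted `profile list --output json` payload and counts Config.Nodes, expecting 3 entries but finding 1. A minimal Go decoder sketch follows, modeling only the fields needed for the count; the real profile/config struct is far larger.

// count_profile_nodes.go — sketch: decode the `profile list --output json`
// payload quoted above and report how many nodes each valid profile has.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileList models just the keys used here ("valid", "Name", "Config", "Nodes").
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Config struct {
			Nodes []struct {
				Name string `json:"Name"`
			} `json:"Nodes"`
		} `json:"Config"`
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64", "profile", "list", "--output", "json").Output()
	if err != nil {
		fmt.Println("profile list failed:", err)
		return
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, p := range pl.Valid {
		// The failing assertion expected 3 nodes here but found 1.
		fmt.Printf("%s: %d node(s)\n", p.Name, len(p.Config.Nodes))
	}
}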

                                                
                                    
TestMultiNode/serial/CopyFile (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-806000 status --output json --alsologtostderr
multinode_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-806000 status --output json --alsologtostderr: exit status 7 (28.579375ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-806000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0821 04:24:40.965195    3829 out.go:296] Setting OutFile to fd 1 ...
	I0821 04:24:40.965333    3829 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:24:40.965336    3829 out.go:309] Setting ErrFile to fd 2...
	I0821 04:24:40.965338    3829 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:24:40.965450    3829 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17102-920/.minikube/bin
	I0821 04:24:40.965562    3829 out.go:303] Setting JSON to true
	I0821 04:24:40.965585    3829 mustload.go:65] Loading cluster: multinode-806000
	I0821 04:24:40.965634    3829 notify.go:220] Checking for updates...
	I0821 04:24:40.965763    3829 config.go:182] Loaded profile config "multinode-806000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0821 04:24:40.965768    3829 status.go:255] checking status of multinode-806000 ...
	I0821 04:24:40.965948    3829 status.go:330] multinode-806000 host status = "Stopped" (err=<nil>)
	I0821 04:24:40.965951    3829 status.go:343] host is not running, skipping remaining checks
	I0821 04:24:40.965953    3829 status.go:257] multinode-806000 status: &{Name:multinode-806000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:180: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-806000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-806000 -n multinode-806000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-806000 -n multinode-806000: exit status 7 (28.719583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-806000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
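
Note: the unmarshal error above is a shape mismatch — with a single node, `status --output json` prints one JSON object, while the test decodes into []cmd.Status (an array). A hedged Go sketch that tolerates either shape follows; `Status` here is a local stand-in with only the fields visible in the output, not minikube's actual cmd.Status type.

// decode_status.go — sketch: accept both a single status object (one node)
// and an array of statuses (multi-node) from `minikube status --output json`.
package main

import (
	"encoding/json"
	"fmt"
)

type Status struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
	Worker                                     bool
}

func decodeStatuses(data []byte) ([]Status, error) {
	var many []Status
	if err := json.Unmarshal(data, &many); err == nil {
		return many, nil // multi-node: already an array
	}
	var one Status
	if err := json.Unmarshal(data, &one); err != nil {
		return nil, err // neither shape decoded
	}
	return []Status{one}, nil // single node: wrap the lone object
}

func main() {
	// The exact single-object payload from the failing run above.
	raw := []byte(`{"Name":"multinode-806000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)
	sts, err := decodeStatuses(raw)
	fmt.Println(sts, err)
}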

                                                
                                    
TestMultiNode/serial/StopNode (0.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-806000 node stop m03
multinode_test.go:210: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-806000 node stop m03: exit status 85 (47.306792ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:212: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-806000 node stop m03": exit status 85
multinode_test.go:216: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-806000 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-806000 status: exit status 7 (28.961209ms)

                                                
                                                
-- stdout --
	multinode-806000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-806000 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-806000 status --alsologtostderr: exit status 7 (28.633875ms)

                                                
                                                
-- stdout --
	multinode-806000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0821 04:24:41.099532    3837 out.go:296] Setting OutFile to fd 1 ...
	I0821 04:24:41.099696    3837 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:24:41.099699    3837 out.go:309] Setting ErrFile to fd 2...
	I0821 04:24:41.099701    3837 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:24:41.099834    3837 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17102-920/.minikube/bin
	I0821 04:24:41.099954    3837 out.go:303] Setting JSON to false
	I0821 04:24:41.099969    3837 mustload.go:65] Loading cluster: multinode-806000
	I0821 04:24:41.100032    3837 notify.go:220] Checking for updates...
	I0821 04:24:41.100149    3837 config.go:182] Loaded profile config "multinode-806000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0821 04:24:41.100154    3837 status.go:255] checking status of multinode-806000 ...
	I0821 04:24:41.100341    3837 status.go:330] multinode-806000 host status = "Stopped" (err=<nil>)
	I0821 04:24:41.100344    3837 status.go:343] host is not running, skipping remaining checks
	I0821 04:24:41.100346    3837 status.go:257] multinode-806000 status: &{Name:multinode-806000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:229: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-806000 status --alsologtostderr": multinode-806000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-806000 -n multinode-806000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-806000 -n multinode-806000: exit status 7 (28.141083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-806000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.13s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-806000 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-806000 node start m03 --alsologtostderr: exit status 85 (44.570916ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0821 04:24:41.156470    3841 out.go:296] Setting OutFile to fd 1 ...
	I0821 04:24:41.156675    3841 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:24:41.156678    3841 out.go:309] Setting ErrFile to fd 2...
	I0821 04:24:41.156680    3841 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:24:41.156784    3841 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17102-920/.minikube/bin
	I0821 04:24:41.157009    3841 mustload.go:65] Loading cluster: multinode-806000
	I0821 04:24:41.157202    3841 config.go:182] Loaded profile config "multinode-806000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0821 04:24:41.161399    3841 out.go:177] 
	W0821 04:24:41.164454    3841 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0821 04:24:41.164459    3841 out.go:239] * 
	* 
	W0821 04:24:41.165998    3841 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0821 04:24:41.169405    3841 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:256: I0821 04:24:41.156470    3841 out.go:296] Setting OutFile to fd 1 ...
I0821 04:24:41.156675    3841 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0821 04:24:41.156678    3841 out.go:309] Setting ErrFile to fd 2...
I0821 04:24:41.156680    3841 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0821 04:24:41.156784    3841 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17102-920/.minikube/bin
I0821 04:24:41.157009    3841 mustload.go:65] Loading cluster: multinode-806000
I0821 04:24:41.157202    3841 config.go:182] Loaded profile config "multinode-806000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
I0821 04:24:41.161399    3841 out.go:177] 
W0821 04:24:41.164454    3841 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0821 04:24:41.164459    3841 out.go:239] * 
* 
W0821 04:24:41.165998    3841 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0821 04:24:41.169405    3841 out.go:177] 
multinode_test.go:257: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-806000 node start m03 --alsologtostderr": exit status 85
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-806000 status
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-806000 status: exit status 7 (28.739834ms)

-- stdout --
	multinode-806000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:263: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-806000 status" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-806000 -n multinode-806000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-806000 -n multinode-806000: exit status 7 (28.195417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-806000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (0.10s)
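
Note on the failure mode: exit status 85 maps to GUEST_NODE_RETRIEVE, and m03 is missing because the earlier AddNode step never succeeded, so the profile for multinode-806000 only records its control-plane node. A minimal Go sketch for confirming this from the profile on disk; the config.json location under $HOME/.minikube and the Nodes/Name field names are assumptions inferred from the profile dump in the stderr traces below, not guaranteed API:

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
		"path/filepath"
	)

	// profile mirrors only the fields needed here; the shape is inferred
	// from the Nodes:[{Name: ...}] dump in the stderr trace below.
	type profile struct {
		Nodes []struct {
			Name string
		}
	}

	func main() {
		home, err := os.UserHomeDir()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// Assumed default location; this CI run points MINIKUBE_HOME elsewhere.
		cfg := filepath.Join(home, ".minikube", "profiles", "multinode-806000", "config.json")
		data, err := os.ReadFile(cfg)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		var p profile
		if err := json.Unmarshal(data, &p); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Printf("%d node(s) recorded in profile\n", len(p.Nodes))
		for _, n := range p.Nodes {
			// The primary node is stored with an empty name; added nodes
			// would appear as m02, m03, and so on.
			fmt.Printf("  name=%q\n", n.Name)
		}
	}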

TestMultiNode/serial/RestartKeepsNodes (5.36s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-806000
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-806000
multinode_test.go:295: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-806000 --wait=true -v=8 --alsologtostderr
E0821 04:24:41.741869    1362 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/ingress-addon-legacy-717000/client.crt: no such file or directory
E0821 04:24:44.303305    1362 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/ingress-addon-legacy-717000/client.crt: no such file or directory
multinode_test.go:295: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-806000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.174169709s)

-- stdout --
	* [multinode-806000] minikube v1.31.2 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17102
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node multinode-806000 in cluster multinode-806000
	* Restarting existing qemu2 VM for "multinode-806000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-806000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0821 04:24:41.344924    3851 out.go:296] Setting OutFile to fd 1 ...
	I0821 04:24:41.345029    3851 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:24:41.345031    3851 out.go:309] Setting ErrFile to fd 2...
	I0821 04:24:41.345033    3851 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:24:41.345136    3851 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17102-920/.minikube/bin
	I0821 04:24:41.346082    3851 out.go:303] Setting JSON to false
	I0821 04:24:41.361043    3851 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3255,"bootTime":1692613826,"procs":412,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0821 04:24:41.361113    3851 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0821 04:24:41.366330    3851 out.go:177] * [multinode-806000] minikube v1.31.2 on Darwin 13.5 (arm64)
	I0821 04:24:41.373337    3851 out.go:177]   - MINIKUBE_LOCATION=17102
	I0821 04:24:41.377285    3851 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	I0821 04:24:41.373363    3851 notify.go:220] Checking for updates...
	I0821 04:24:41.384310    3851 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0821 04:24:41.388308    3851 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0821 04:24:41.389644    3851 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	I0821 04:24:41.392335    3851 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0821 04:24:41.395649    3851 config.go:182] Loaded profile config "multinode-806000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0821 04:24:41.395705    3851 driver.go:373] Setting default libvirt URI to qemu:///system
	I0821 04:24:41.400142    3851 out.go:177] * Using the qemu2 driver based on existing profile
	I0821 04:24:41.407247    3851 start.go:298] selected driver: qemu2
	I0821 04:24:41.407253    3851 start.go:902] validating driver "qemu2" against &{Name:multinode-806000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:multinode-806000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 04:24:41.407307    3851 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0821 04:24:41.409266    3851 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0821 04:24:41.409291    3851 cni.go:84] Creating CNI manager for ""
	I0821 04:24:41.409295    3851 cni.go:136] 1 nodes found, recommending kindnet
	I0821 04:24:41.409301    3851 start_flags.go:319] config:
	{Name:multinode-806000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:multinode-806000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 04:24:41.413145    3851 iso.go:125] acquiring lock: {Name:mk813ea611542195bb0511881888be3fabc72ff6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 04:24:41.421259    3851 out.go:177] * Starting control plane node multinode-806000 in cluster multinode-806000
	I0821 04:24:41.425265    3851 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime docker
	I0821 04:24:41.425281    3851 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4
	I0821 04:24:41.425289    3851 cache.go:57] Caching tarball of preloaded images
	I0821 04:24:41.425335    3851 preload.go:174] Found /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0821 04:24:41.425341    3851 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on docker
	I0821 04:24:41.425388    3851 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/multinode-806000/config.json ...
	I0821 04:24:41.425729    3851 start.go:365] acquiring machines lock for multinode-806000: {Name:mk9b32d9fe994be32d77812db464b2cfa7bfb400 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0821 04:24:41.425759    3851 start.go:369] acquired machines lock for "multinode-806000" in 23.708µs
	I0821 04:24:41.425768    3851 start.go:96] Skipping create...Using existing machine configuration
	I0821 04:24:41.425772    3851 fix.go:54] fixHost starting: 
	I0821 04:24:41.425882    3851 fix.go:102] recreateIfNeeded on multinode-806000: state=Stopped err=<nil>
	W0821 04:24:41.425889    3851 fix.go:128] unexpected machine state, will restart: <nil>
	I0821 04:24:41.429338    3851 out.go:177] * Restarting existing qemu2 VM for "multinode-806000" ...
	I0821 04:24:41.433354    3851 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17102-920/.minikube/machines/multinode-806000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/multinode-806000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17102-920/.minikube/machines/multinode-806000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:da:03:48:9a:56 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/multinode-806000/disk.qcow2
	I0821 04:24:41.435149    3851 main.go:141] libmachine: STDOUT: 
	I0821 04:24:41.435166    3851 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0821 04:24:41.435197    3851 fix.go:56] fixHost completed within 9.427792ms
	I0821 04:24:41.435202    3851 start.go:83] releasing machines lock for "multinode-806000", held for 9.444208ms
	W0821 04:24:41.435208    3851 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0821 04:24:41.435242    3851 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:24:41.435252    3851 start.go:687] Will try again in 5 seconds ...
	I0821 04:24:46.435396    3851 start.go:365] acquiring machines lock for multinode-806000: {Name:mk9b32d9fe994be32d77812db464b2cfa7bfb400 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0821 04:24:46.435766    3851 start.go:369] acquired machines lock for "multinode-806000" in 258.75µs
	I0821 04:24:46.435898    3851 start.go:96] Skipping create...Using existing machine configuration
	I0821 04:24:46.435917    3851 fix.go:54] fixHost starting: 
	I0821 04:24:46.436568    3851 fix.go:102] recreateIfNeeded on multinode-806000: state=Stopped err=<nil>
	W0821 04:24:46.436594    3851 fix.go:128] unexpected machine state, will restart: <nil>
	I0821 04:24:46.443858    3851 out.go:177] * Restarting existing qemu2 VM for "multinode-806000" ...
	I0821 04:24:46.447124    3851 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17102-920/.minikube/machines/multinode-806000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/multinode-806000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17102-920/.minikube/machines/multinode-806000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:da:03:48:9a:56 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/multinode-806000/disk.qcow2
	I0821 04:24:46.455132    3851 main.go:141] libmachine: STDOUT: 
	I0821 04:24:46.455194    3851 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0821 04:24:46.455283    3851 fix.go:56] fixHost completed within 19.375542ms
	I0821 04:24:46.455309    3851 start.go:83] releasing machines lock for "multinode-806000", held for 19.524667ms
	W0821 04:24:46.455504    3851 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-806000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-806000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:24:46.462877    3851 out.go:177] 
	W0821 04:24:46.467128    3851 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0821 04:24:46.467153    3851 out.go:239] * 
	* 
	W0821 04:24:46.469656    3851 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0821 04:24:46.477872    3851 out.go:177] 

** /stderr **
multinode_test.go:297: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-806000" : exit status 80
multinode_test.go:300: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-806000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-806000 -n multinode-806000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-806000 -n multinode-806000: exit status 7 (32.533666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-806000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (5.36s)
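
The stderr trace above shows the control flow behind exit status 80: fixHost fails when socket_vmnet_client is refused, start.go logs "Will try again in 5 seconds ...", retries once, and then aborts with GUEST_PROVISION. A compressed, illustrative reconstruction of that two-attempt pattern in Go (not minikube's actual source; the messages and exit code are copied from the log):

	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// startHost stands in for the qemu2 driver start that fails throughout
	// this report; the error string is copied from the log.
	func startHost() error {
		return errors.New(`driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1`)
	}

	func main() {
		err := startHost()
		if err == nil {
			return
		}
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
		if err := startHost(); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			os.Exit(80) // the exit status the test observed
		}
	}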

TestMultiNode/serial/DeleteNode (0.09s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-806000 node delete m03
multinode_test.go:394: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-806000 node delete m03: exit status 89 (37.719917ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-806000"

-- /stdout --
multinode_test.go:396: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-806000 node delete m03": exit status 89
multinode_test.go:400: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-806000 status --alsologtostderr
multinode_test.go:400: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-806000 status --alsologtostderr: exit status 7 (28.115125ms)

-- stdout --
	multinode-806000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0821 04:24:46.655865    3865 out.go:296] Setting OutFile to fd 1 ...
	I0821 04:24:46.656002    3865 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:24:46.656005    3865 out.go:309] Setting ErrFile to fd 2...
	I0821 04:24:46.656007    3865 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:24:46.656123    3865 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17102-920/.minikube/bin
	I0821 04:24:46.656238    3865 out.go:303] Setting JSON to false
	I0821 04:24:46.656249    3865 mustload.go:65] Loading cluster: multinode-806000
	I0821 04:24:46.656321    3865 notify.go:220] Checking for updates...
	I0821 04:24:46.656425    3865 config.go:182] Loaded profile config "multinode-806000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0821 04:24:46.656431    3865 status.go:255] checking status of multinode-806000 ...
	I0821 04:24:46.656621    3865 status.go:330] multinode-806000 host status = "Stopped" (err=<nil>)
	I0821 04:24:46.656625    3865 status.go:343] host is not running, skipping remaining checks
	I0821 04:24:46.656627    3865 status.go:257] multinode-806000 status: &{Name:multinode-806000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:402: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-806000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-806000 -n multinode-806000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-806000 -n multinode-806000: exit status 7 (28.599583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-806000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.09s)
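
Two exit codes recur in this block: 89 when a command requires a running control plane, and 7 from status when the host is stopped (which helpers_test.go:239 notes "may be ok"). A small Go sketch that runs the same status command the post-mortem helpers use and surfaces the exit code; the relative binary path and profile name are taken from the log and only resolve inside this CI checkout:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// Same invocation as the helpers in this report; adjust the
		// binary path and profile name outside this CI checkout.
		cmd := exec.Command("out/minikube-darwin-arm64",
			"status", "--format={{.Host}}", "-p", "multinode-806000")
		out, err := cmd.CombinedOutput()
		fmt.Printf("output: %s", out)
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			// 7 means the host is stopped; 89 means the control plane
			// must be running for the requested command.
			fmt.Printf("exit code: %d\n", ee.ExitCode())
		}
	}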

TestMultiNode/serial/StopMultiNode (0.14s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-806000 stop
multinode_test.go:320: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-806000 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-806000 status: exit status 7 (28.447542ms)

-- stdout --
	multinode-806000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-806000 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-806000 status --alsologtostderr: exit status 7 (28.318167ms)

-- stdout --
	multinode-806000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0821 04:24:46.798773    3873 out.go:296] Setting OutFile to fd 1 ...
	I0821 04:24:46.798911    3873 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:24:46.798914    3873 out.go:309] Setting ErrFile to fd 2...
	I0821 04:24:46.798920    3873 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:24:46.799043    3873 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17102-920/.minikube/bin
	I0821 04:24:46.799154    3873 out.go:303] Setting JSON to false
	I0821 04:24:46.799165    3873 mustload.go:65] Loading cluster: multinode-806000
	I0821 04:24:46.799218    3873 notify.go:220] Checking for updates...
	I0821 04:24:46.799345    3873 config.go:182] Loaded profile config "multinode-806000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0821 04:24:46.799350    3873 status.go:255] checking status of multinode-806000 ...
	I0821 04:24:46.799537    3873 status.go:330] multinode-806000 host status = "Stopped" (err=<nil>)
	I0821 04:24:46.799540    3873 status.go:343] host is not running, skipping remaining checks
	I0821 04:24:46.799543    3873 status.go:257] multinode-806000 status: &{Name:multinode-806000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:333: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-806000 status --alsologtostderr": multinode-806000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:337: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-806000 status --alsologtostderr": multinode-806000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-806000 -n multinode-806000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-806000 -n multinode-806000: exit status 7 (28.265459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-806000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (0.14s)
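
The assertions at multinode_test.go:333 and :337 fail on a count, not a parse error: a two-node cluster should produce two "host: Stopped" and two "kubelet: Stopped" lines, but only the control-plane block exists because the worker was never created. The check reduces to substring counting, roughly as in this sketch (a reconstruction under that assumption, not the test's actual code):

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// Status output copied from the failure above: a single node block.
		status := "multinode-806000\n" +
			"type: Control Plane\n" +
			"host: Stopped\n" +
			"kubelet: Stopped\n" +
			"apiserver: Stopped\n" +
			"kubeconfig: Stopped\n"
		const wantNodes = 2 // control plane plus the worker the test expects
		if got := strings.Count(status, "host: Stopped"); got != wantNodes {
			fmt.Printf("incorrect number of stopped hosts: got %d, want %d\n", got, wantNodes)
		}
		if got := strings.Count(status, "kubelet: Stopped"); got != wantNodes {
			fmt.Printf("incorrect number of stopped kubelets: got %d, want %d\n", got, wantNodes)
		}
	}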

TestMultiNode/serial/RestartMultiNode (5.25s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-806000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
E0821 04:24:49.424067    1362 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/ingress-addon-legacy-717000/client.crt: no such file or directory
multinode_test.go:354: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-806000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.177253083s)

-- stdout --
	* [multinode-806000] minikube v1.31.2 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17102
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node multinode-806000 in cluster multinode-806000
	* Restarting existing qemu2 VM for "multinode-806000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-806000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0821 04:24:46.855067    3877 out.go:296] Setting OutFile to fd 1 ...
	I0821 04:24:46.855193    3877 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:24:46.855196    3877 out.go:309] Setting ErrFile to fd 2...
	I0821 04:24:46.855199    3877 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:24:46.855301    3877 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17102-920/.minikube/bin
	I0821 04:24:46.856244    3877 out.go:303] Setting JSON to false
	I0821 04:24:46.871331    3877 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3260,"bootTime":1692613826,"procs":412,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0821 04:24:46.871416    3877 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0821 04:24:46.876395    3877 out.go:177] * [multinode-806000] minikube v1.31.2 on Darwin 13.5 (arm64)
	I0821 04:24:46.883450    3877 out.go:177]   - MINIKUBE_LOCATION=17102
	I0821 04:24:46.887398    3877 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	I0821 04:24:46.883509    3877 notify.go:220] Checking for updates...
	I0821 04:24:46.894348    3877 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0821 04:24:46.898350    3877 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0821 04:24:46.899650    3877 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	I0821 04:24:46.902437    3877 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0821 04:24:46.905672    3877 config.go:182] Loaded profile config "multinode-806000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0821 04:24:46.905904    3877 driver.go:373] Setting default libvirt URI to qemu:///system
	I0821 04:24:46.910245    3877 out.go:177] * Using the qemu2 driver based on existing profile
	I0821 04:24:46.917364    3877 start.go:298] selected driver: qemu2
	I0821 04:24:46.917371    3877 start.go:902] validating driver "qemu2" against &{Name:multinode-806000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:multinode-806000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 04:24:46.917441    3877 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0821 04:24:46.919337    3877 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0821 04:24:46.919360    3877 cni.go:84] Creating CNI manager for ""
	I0821 04:24:46.919364    3877 cni.go:136] 1 nodes found, recommending kindnet
	I0821 04:24:46.919373    3877 start_flags.go:319] config:
	{Name:multinode-806000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:multinode-806000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 04:24:46.923197    3877 iso.go:125] acquiring lock: {Name:mk813ea611542195bb0511881888be3fabc72ff6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 04:24:46.930338    3877 out.go:177] * Starting control plane node multinode-806000 in cluster multinode-806000
	I0821 04:24:46.934370    3877 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime docker
	I0821 04:24:46.934392    3877 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4
	I0821 04:24:46.934406    3877 cache.go:57] Caching tarball of preloaded images
	I0821 04:24:46.934454    3877 preload.go:174] Found /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0821 04:24:46.934459    3877 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on docker
	I0821 04:24:46.934515    3877 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/multinode-806000/config.json ...
	I0821 04:24:46.934870    3877 start.go:365] acquiring machines lock for multinode-806000: {Name:mk9b32d9fe994be32d77812db464b2cfa7bfb400 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0821 04:24:46.934896    3877 start.go:369] acquired machines lock for "multinode-806000" in 19.333µs
	I0821 04:24:46.934905    3877 start.go:96] Skipping create...Using existing machine configuration
	I0821 04:24:46.934910    3877 fix.go:54] fixHost starting: 
	I0821 04:24:46.935019    3877 fix.go:102] recreateIfNeeded on multinode-806000: state=Stopped err=<nil>
	W0821 04:24:46.935027    3877 fix.go:128] unexpected machine state, will restart: <nil>
	I0821 04:24:46.938420    3877 out.go:177] * Restarting existing qemu2 VM for "multinode-806000" ...
	I0821 04:24:46.946424    3877 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17102-920/.minikube/machines/multinode-806000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/multinode-806000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17102-920/.minikube/machines/multinode-806000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:da:03:48:9a:56 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/multinode-806000/disk.qcow2
	I0821 04:24:46.948482    3877 main.go:141] libmachine: STDOUT: 
	I0821 04:24:46.948499    3877 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0821 04:24:46.948530    3877 fix.go:56] fixHost completed within 13.623083ms
	I0821 04:24:46.948536    3877 start.go:83] releasing machines lock for "multinode-806000", held for 13.640209ms
	W0821 04:24:46.948545    3877 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0821 04:24:46.948580    3877 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:24:46.948586    3877 start.go:687] Will try again in 5 seconds ...
	I0821 04:24:51.949390    3877 start.go:365] acquiring machines lock for multinode-806000: {Name:mk9b32d9fe994be32d77812db464b2cfa7bfb400 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0821 04:24:51.950090    3877 start.go:369] acquired machines lock for "multinode-806000" in 597.625µs
	I0821 04:24:51.950258    3877 start.go:96] Skipping create...Using existing machine configuration
	I0821 04:24:51.950278    3877 fix.go:54] fixHost starting: 
	I0821 04:24:51.950990    3877 fix.go:102] recreateIfNeeded on multinode-806000: state=Stopped err=<nil>
	W0821 04:24:51.951015    3877 fix.go:128] unexpected machine state, will restart: <nil>
	I0821 04:24:51.955531    3877 out.go:177] * Restarting existing qemu2 VM for "multinode-806000" ...
	I0821 04:24:51.962652    3877 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17102-920/.minikube/machines/multinode-806000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/multinode-806000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17102-920/.minikube/machines/multinode-806000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:da:03:48:9a:56 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/multinode-806000/disk.qcow2
	I0821 04:24:51.971613    3877 main.go:141] libmachine: STDOUT: 
	I0821 04:24:51.971672    3877 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0821 04:24:51.971759    3877 fix.go:56] fixHost completed within 21.484959ms
	I0821 04:24:51.971780    3877 start.go:83] releasing machines lock for "multinode-806000", held for 21.671958ms
	W0821 04:24:51.972041    3877 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-806000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-806000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:24:51.978358    3877 out.go:177] 
	W0821 04:24:51.981542    3877 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0821 04:24:51.981566    3877 out.go:239] * 
	* 
	W0821 04:24:51.983933    3877 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0821 04:24:51.992206    3877 out.go:177] 

** /stderr **
multinode_test.go:356: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-806000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-806000 -n multinode-806000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-806000 -n multinode-806000: exit status 7 (69.06375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-806000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.25s)
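
Every restart in this group dies at the same step: libmachine execs /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 ... and the client is refused. Whether anything is listening on that socket can be verified independently of minikube; a minimal Go probe, assuming only the socket path shown in the failing command line:

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// Socket path copied from the failing socket_vmnet_client invocation.
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// "connection refused" here reproduces the failure mode in this
			// report and usually means the socket_vmnet daemon is not running.
			fmt.Fprintf(os.Stderr, "cannot connect to %s: %v\n", sock, err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Printf("%s is accepting connections\n", sock)
	}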

TestMultiNode/serial/ValidateNameConflict (20.1s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-806000
multinode_test.go:452: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-806000-m01 --driver=qemu2 
E0821 04:24:59.664530    1362 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/ingress-addon-legacy-717000/client.crt: no such file or directory
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-806000-m01 --driver=qemu2 : exit status 80 (9.944196625s)

-- stdout --
	* [multinode-806000-m01] minikube v1.31.2 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17102
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-806000-m01 in cluster multinode-806000-m01
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-806000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-806000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-806000-m02 --driver=qemu2 
multinode_test.go:460: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-806000-m02 --driver=qemu2 : exit status 80 (9.876188042s)

-- stdout --
	* [multinode-806000-m02] minikube v1.31.2 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17102
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-806000-m02 in cluster multinode-806000-m02
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-806000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-806000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:462: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-806000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-806000
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-806000: exit status 89 (77.880125ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-806000"

-- /stdout --
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-806000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-806000 -n multinode-806000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-806000 -n multinode-806000: exit status 7 (31.217542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-806000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.10s)
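
ValidateNameConflict fails for the same underlying reason as the rest of this group: no reachable socket_vmnet. Complementing the socket probe above, a sketch that checks whether a socket_vmnet process exists at all by shelling out to pgrep (available on macOS); a non-zero pgrep exit means no match, and note the -f flag also matches socket_vmnet_client invocations:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// pgrep -f matches against the full command line and exits non-zero
		// when nothing matches; -l prints the matched names.
		out, err := exec.Command("pgrep", "-fl", "socket_vmnet").Output()
		if err != nil {
			fmt.Println("no socket_vmnet process found; that would explain the refused connections above")
			return
		}
		fmt.Printf("matching processes:\n%s", out)
	}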

TestPreload (9.93s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-886000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
E0821 04:25:20.144850    1362 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/ingress-addon-legacy-717000/client.crt: no such file or directory
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-886000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.767933166s)

-- stdout --
	* [test-preload-886000] minikube v1.31.2 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17102
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node test-preload-886000 in cluster test-preload-886000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-886000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0821 04:25:12.321069    3932 out.go:296] Setting OutFile to fd 1 ...
	I0821 04:25:12.321193    3932 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:25:12.321196    3932 out.go:309] Setting ErrFile to fd 2...
	I0821 04:25:12.321198    3932 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:25:12.321300    3932 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17102-920/.minikube/bin
	I0821 04:25:12.322304    3932 out.go:303] Setting JSON to false
	I0821 04:25:12.337450    3932 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3286,"bootTime":1692613826,"procs":410,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0821 04:25:12.337525    3932 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0821 04:25:12.342929    3932 out.go:177] * [test-preload-886000] minikube v1.31.2 on Darwin 13.5 (arm64)
	I0821 04:25:12.350798    3932 out.go:177]   - MINIKUBE_LOCATION=17102
	I0821 04:25:12.350852    3932 notify.go:220] Checking for updates...
	I0821 04:25:12.354915    3932 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	I0821 04:25:12.358919    3932 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0821 04:25:12.361844    3932 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0821 04:25:12.365885    3932 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	I0821 04:25:12.368954    3932 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0821 04:25:12.372253    3932 config.go:182] Loaded profile config "multinode-806000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0821 04:25:12.372298    3932 driver.go:373] Setting default libvirt URI to qemu:///system
	I0821 04:25:12.375907    3932 out.go:177] * Using the qemu2 driver based on user configuration
	I0821 04:25:12.381906    3932 start.go:298] selected driver: qemu2
	I0821 04:25:12.381912    3932 start.go:902] validating driver "qemu2" against <nil>
	I0821 04:25:12.381918    3932 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0821 04:25:12.383846    3932 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0821 04:25:12.387944    3932 out.go:177] * Automatically selected the socket_vmnet network
	I0821 04:25:12.391041    3932 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0821 04:25:12.391071    3932 cni.go:84] Creating CNI manager for ""
	I0821 04:25:12.391079    3932 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0821 04:25:12.391084    3932 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0821 04:25:12.391089    3932 start_flags.go:319] config:
	{Name:test-preload-886000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-886000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 04:25:12.395358    3932 iso.go:125] acquiring lock: {Name:mk813ea611542195bb0511881888be3fabc72ff6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 04:25:12.403938    3932 out.go:177] * Starting control plane node test-preload-886000 in cluster test-preload-886000
	I0821 04:25:12.407786    3932 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0821 04:25:12.407883    3932 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/test-preload-886000/config.json ...
	I0821 04:25:12.407913    3932 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/test-preload-886000/config.json: {Name:mkc781de1baa8f3244c711e1cef4a017780e5044 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 04:25:12.407897    3932 cache.go:107] acquiring lock: {Name:mk2c32575c8f9aa36e98dd49f399a8549ea6540f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 04:25:12.407896    3932 cache.go:107] acquiring lock: {Name:mkb0ecb25330e86fc045affefacac515111c53df Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 04:25:12.407924    3932 cache.go:107] acquiring lock: {Name:mk6f5ce9f545da76d080922c016be42be87e0821 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 04:25:12.408146    3932 cache.go:107] acquiring lock: {Name:mkfab7ce76e1c63eff1785c31e343d100e5eb1c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 04:25:12.408159    3932 cache.go:107] acquiring lock: {Name:mk821e17142440f61bac668bb58dd6c89942cee3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 04:25:12.408150    3932 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0821 04:25:12.408178    3932 cache.go:107] acquiring lock: {Name:mk981cf1cf201b3a0ae877fa954bd3c8a560c532 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 04:25:12.408169    3932 start.go:365] acquiring machines lock for test-preload-886000: {Name:mk9b32d9fe994be32d77812db464b2cfa7bfb400 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0821 04:25:12.408147    3932 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0821 04:25:12.408218    3932 start.go:369] acquired machines lock for "test-preload-886000" in 30.125µs
	I0821 04:25:12.408219    3932 cache.go:107] acquiring lock: {Name:mk4798a719cab3d5b7c202b52efdc1b888ea20a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 04:25:12.408233    3932 start.go:93] Provisioning new machine with config: &{Name:test-preload-886000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-886000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0821 04:25:12.408262    3932 start.go:125] createHost starting for "" (driver="qemu2")
	I0821 04:25:12.408288    3932 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0821 04:25:12.411962    3932 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0821 04:25:12.408325    3932 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0821 04:25:12.408187    3932 cache.go:107] acquiring lock: {Name:mkd8a4bf28a31863bcf3849ade0d13f970624f76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 04:25:12.408381    3932 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0821 04:25:12.408389    3932 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0821 04:25:12.408713    3932 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0821 04:25:12.412595    3932 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0821 04:25:12.419395    3932 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0821 04:25:12.420180    3932 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0821 04:25:12.420454    3932 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0821 04:25:12.420513    3932 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0821 04:25:12.420563    3932 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0821 04:25:12.420563    3932 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0821 04:25:12.422886    3932 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0821 04:25:12.422933    3932 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0821 04:25:12.428929    3932 start.go:159] libmachine.API.Create for "test-preload-886000" (driver="qemu2")
	I0821 04:25:12.428948    3932 client.go:168] LocalClient.Create starting
	I0821 04:25:12.429008    3932 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem
	I0821 04:25:12.429043    3932 main.go:141] libmachine: Decoding PEM data...
	I0821 04:25:12.429053    3932 main.go:141] libmachine: Parsing certificate...
	I0821 04:25:12.429089    3932 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/cert.pem
	I0821 04:25:12.429107    3932 main.go:141] libmachine: Decoding PEM data...
	I0821 04:25:12.429115    3932 main.go:141] libmachine: Parsing certificate...
	I0821 04:25:12.429412    3932 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17102-920/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17102-920/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0821 04:25:12.551810    3932 main.go:141] libmachine: Creating SSH key...
	I0821 04:25:12.642026    3932 main.go:141] libmachine: Creating Disk image...
	I0821 04:25:12.642041    3932 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0821 04:25:12.642214    3932 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17102-920/.minikube/machines/test-preload-886000/disk.qcow2.raw /Users/jenkins/minikube-integration/17102-920/.minikube/machines/test-preload-886000/disk.qcow2
	I0821 04:25:12.650819    3932 main.go:141] libmachine: STDOUT: 
	I0821 04:25:12.650840    3932 main.go:141] libmachine: STDERR: 
	I0821 04:25:12.650906    3932 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/test-preload-886000/disk.qcow2 +20000M
	I0821 04:25:12.658612    3932 main.go:141] libmachine: STDOUT: Image resized.
	
	I0821 04:25:12.658626    3932 main.go:141] libmachine: STDERR: 
	I0821 04:25:12.658645    3932 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17102-920/.minikube/machines/test-preload-886000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17102-920/.minikube/machines/test-preload-886000/disk.qcow2
	I0821 04:25:12.658652    3932 main.go:141] libmachine: Starting QEMU VM...
	I0821 04:25:12.658690    3932 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17102-920/.minikube/machines/test-preload-886000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/test-preload-886000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17102-920/.minikube/machines/test-preload-886000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:10:f0:00:e3:eb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/test-preload-886000/disk.qcow2
	I0821 04:25:12.660376    3932 main.go:141] libmachine: STDOUT: 
	I0821 04:25:12.660392    3932 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0821 04:25:12.660414    3932 client.go:171] LocalClient.Create took 231.476291ms
	I0821 04:25:13.056094    3932 cache.go:162] opening:  /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	W0821 04:25:13.387135    3932 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0821 04:25:13.387180    3932 cache.go:162] opening:  /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	W0821 04:25:13.548921    3932 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0821 04:25:13.548948    3932 cache.go:162] opening:  /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0821 04:25:13.575148    3932 cache.go:162] opening:  /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0821 04:25:13.779458    3932 cache.go:162] opening:  /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0821 04:25:13.800410    3932 cache.go:157] /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0821 04:25:13.800427    3932 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.392637458s
	I0821 04:25:13.800437    3932 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0821 04:25:13.978018    3932 cache.go:162] opening:  /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0821 04:25:14.144101    3932 cache.go:162] opening:  /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0821 04:25:14.266715    3932 cache.go:157] /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0821 04:25:14.266735    3932 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 1.85875625s
	I0821 04:25:14.266744    3932 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	I0821 04:25:14.388270    3932 cache.go:162] opening:  /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0821 04:25:14.660493    3932 start.go:128] duration metric: createHost completed in 2.252380541s
	I0821 04:25:14.660532    3932 start.go:83] releasing machines lock for "test-preload-886000", held for 2.252472458s
	W0821 04:25:14.660607    3932 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:25:14.667501    3932 out.go:177] * Deleting "test-preload-886000" in qemu2 ...
	W0821 04:25:14.687422    3932 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:25:14.687453    3932 start.go:687] Will try again in 5 seconds ...
	I0821 04:25:15.910079    3932 cache.go:157] /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0821 04:25:15.910143    3932 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 3.502234291s
	I0821 04:25:15.910178    3932 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0821 04:25:15.925420    3932 cache.go:157] /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0821 04:25:15.925462    3932 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 3.517591583s
	I0821 04:25:15.925493    3932 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0821 04:25:17.703320    3932 cache.go:157] /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0821 04:25:17.703389    3932 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 5.295864042s
	I0821 04:25:17.703425    3932 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0821 04:25:18.028808    3932 cache.go:157] /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0821 04:25:18.028875    3932 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 5.6213485s
	I0821 04:25:18.028911    3932 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0821 04:25:18.190151    3932 cache.go:157] /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0821 04:25:18.190193    3932 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 5.782409875s
	I0821 04:25:18.190217    3932 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0821 04:25:19.687264    3932 start.go:365] acquiring machines lock for test-preload-886000: {Name:mk9b32d9fe994be32d77812db464b2cfa7bfb400 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0821 04:25:19.687741    3932 start.go:369] acquired machines lock for "test-preload-886000" in 403.625µs
	I0821 04:25:19.687851    3932 start.go:93] Provisioning new machine with config: &{Name:test-preload-886000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-886000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0821 04:25:19.688124    3932 start.go:125] createHost starting for "" (driver="qemu2")
	I0821 04:25:19.692627    3932 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0821 04:25:19.740210    3932 start.go:159] libmachine.API.Create for "test-preload-886000" (driver="qemu2")
	I0821 04:25:19.740252    3932 client.go:168] LocalClient.Create starting
	I0821 04:25:19.740427    3932 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem
	I0821 04:25:19.740507    3932 main.go:141] libmachine: Decoding PEM data...
	I0821 04:25:19.740531    3932 main.go:141] libmachine: Parsing certificate...
	I0821 04:25:19.740601    3932 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/cert.pem
	I0821 04:25:19.740645    3932 main.go:141] libmachine: Decoding PEM data...
	I0821 04:25:19.740661    3932 main.go:141] libmachine: Parsing certificate...
	I0821 04:25:19.741206    3932 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17102-920/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17102-920/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0821 04:25:19.872010    3932 main.go:141] libmachine: Creating SSH key...
	I0821 04:25:20.008361    3932 main.go:141] libmachine: Creating Disk image...
	I0821 04:25:20.008369    3932 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0821 04:25:20.008505    3932 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17102-920/.minikube/machines/test-preload-886000/disk.qcow2.raw /Users/jenkins/minikube-integration/17102-920/.minikube/machines/test-preload-886000/disk.qcow2
	I0821 04:25:20.017317    3932 main.go:141] libmachine: STDOUT: 
	I0821 04:25:20.017328    3932 main.go:141] libmachine: STDERR: 
	I0821 04:25:20.017384    3932 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/test-preload-886000/disk.qcow2 +20000M
	I0821 04:25:20.024670    3932 main.go:141] libmachine: STDOUT: Image resized.
	
	I0821 04:25:20.024697    3932 main.go:141] libmachine: STDERR: 
	I0821 04:25:20.024710    3932 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17102-920/.minikube/machines/test-preload-886000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17102-920/.minikube/machines/test-preload-886000/disk.qcow2
	I0821 04:25:20.024716    3932 main.go:141] libmachine: Starting QEMU VM...
	I0821 04:25:20.024761    3932 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17102-920/.minikube/machines/test-preload-886000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/test-preload-886000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17102-920/.minikube/machines/test-preload-886000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:29:4f:c5:26:87 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/test-preload-886000/disk.qcow2
	I0821 04:25:20.026393    3932 main.go:141] libmachine: STDOUT: 
	I0821 04:25:20.026430    3932 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0821 04:25:20.026441    3932 client.go:171] LocalClient.Create took 286.200459ms
	I0821 04:25:21.847206    3932 cache.go:157] /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0821 04:25:21.847284    3932 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 9.439708792s
	I0821 04:25:21.847337    3932 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0821 04:25:21.847381    3932 cache.go:87] Successfully saved all images to host disk.
	I0821 04:25:22.027227    3932 start.go:128] duration metric: createHost completed in 2.339183791s
	I0821 04:25:22.027268    3932 start.go:83] releasing machines lock for "test-preload-886000", held for 2.33963025s
	W0821 04:25:22.027435    3932 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-886000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-886000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:25:22.033892    3932 out.go:177] 
	W0821 04:25:22.037960    3932 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0821 04:25:22.037977    3932 out.go:239] * 
	* 
	W0821 04:25:22.039354    3932 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0821 04:25:22.048905    3932 out.go:177] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-886000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:522: *** TestPreload FAILED at 2023-08-21 04:25:22.06584 -0700 PDT m=+3127.076900709
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-886000 -n test-preload-886000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-886000 -n test-preload-886000: exit status 7 (64.835ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-886000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-886000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-886000
--- FAIL: TestPreload (9.93s)
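Note: the cache.go lines above show that all of the v1.24.4 images were saved to the host cache successfully, so only the VM start failed. The disk-image preparation also succeeded; a standalone sketch of those two steps, using the same qemu-img invocations as the log but with a scratch path (an assumption) instead of the Jenkins machine directory:

	# Convert the raw boot disk to qcow2, as libmachine does before first boot
	qemu-img convert -f raw -O qcow2 ./disk.qcow2.raw ./disk.qcow2
	# Grow the image by 20000 MB to match the Disk=20000MB setting
	qemu-img resize ./disk.qcow2 +20000M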

TestScheduledStopUnix (9.97s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-662000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-662000 --memory=2048 --driver=qemu2 : exit status 80 (9.790532167s)

-- stdout --
	* [scheduled-stop-662000] minikube v1.31.2 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17102
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node scheduled-stop-662000 in cluster scheduled-stop-662000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-662000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-662000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-662000] minikube v1.31.2 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17102
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node scheduled-stop-662000 in cluster scheduled-stop-662000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-662000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-662000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:522: *** TestScheduledStopUnix FAILED at 2023-08-21 04:25:32.017472 -0700 PDT m=+3137.028958293
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-662000 -n scheduled-stop-662000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-662000 -n scheduled-stop-662000: exit status 7 (69.445459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-662000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-662000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-662000
--- FAIL: TestScheduledStopUnix (9.97s)

TestSkaffold (12.11s)

=== RUN   TestSkaffold
E0821 04:25:32.520945    1362 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/client.crt: no such file or directory
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe3568970780 version
skaffold_test.go:63: skaffold version: v2.6.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-745000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-745000 --memory=2600 --driver=qemu2 : exit status 80 (9.699547667s)

-- stdout --
	* [skaffold-745000] minikube v1.31.2 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17102
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node skaffold-745000 in cluster skaffold-745000
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-745000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-745000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-745000] minikube v1.31.2 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17102
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node skaffold-745000 in cluster skaffold-745000
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-745000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-745000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:522: *** TestSkaffold FAILED at 2023-08-21 04:25:44.134626 -0700 PDT m=+3149.146484834
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-745000 -n skaffold-745000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-745000 -n skaffold-745000: exit status 7 (61.761125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-745000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-745000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-745000
--- FAIL: TestSkaffold (12.11s)

TestRunningBinaryUpgrade (158.47s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
E0821 04:26:46.069783    1362 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/functional-818000/client.crt: no such file or directory
E0821 04:27:13.780896    1362 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/functional-818000/client.crt: no such file or directory
E0821 04:27:23.027050    1362 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/ingress-addon-legacy-717000/client.crt: no such file or directory
version_upgrade_test.go:106: v1.6.2 release installation failed: bad response code: 404
panic.go:522: *** TestRunningBinaryUpgrade FAILED at 2023-08-21 04:29:02.301347 -0700 PDT m=+3347.317093709
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-922000 -n running-upgrade-922000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-922000 -n running-upgrade-922000: exit status 85 (89.051667ms)

-- stdout --
	* Profile "running-upgrade-922000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p running-upgrade-922000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "running-upgrade-922000" host is not running, skipping log retrieval (state="* Profile \"running-upgrade-922000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p running-upgrade-922000\"")
helpers_test.go:175: Cleaning up "running-upgrade-922000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-922000
--- FAIL: TestRunningBinaryUpgrade (158.47s)
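Note: unlike the socket_vmnet failures, this test failed while downloading the legacy v1.6.2 minikube release (bad response code: 404). Releases that old predate darwin/arm64 builds, so a 404 on this runner is plausible rather than transient. A sketch to confirm, assuming the standard release bucket layout (the exact URL the test fetches is not shown in this log):

	# Asset for this runner's platform; a 404 here would explain the failure
	curl -sI https://storage.googleapis.com/minikube/releases/v1.6.2/minikube-darwin-arm64 | head -n 1
	# The amd64 asset from the same release should return 200
	curl -sI https://storage.googleapis.com/minikube/releases/v1.6.2/minikube-darwin-amd64 | head -n 1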

TestKubernetesUpgrade (15.39s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-171000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:234: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-171000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.884990417s)

-- stdout --
	* [kubernetes-upgrade-171000] minikube v1.31.2 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17102
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kubernetes-upgrade-171000 in cluster kubernetes-upgrade-171000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-171000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0821 04:29:02.702424    4422 out.go:296] Setting OutFile to fd 1 ...
	I0821 04:29:02.702551    4422 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:29:02.702557    4422 out.go:309] Setting ErrFile to fd 2...
	I0821 04:29:02.702559    4422 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:29:02.702668    4422 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17102-920/.minikube/bin
	I0821 04:29:02.703644    4422 out.go:303] Setting JSON to false
	I0821 04:29:02.718844    4422 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3516,"bootTime":1692613826,"procs":410,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0821 04:29:02.718907    4422 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0821 04:29:02.723415    4422 out.go:177] * [kubernetes-upgrade-171000] minikube v1.31.2 on Darwin 13.5 (arm64)
	I0821 04:29:02.730442    4422 out.go:177]   - MINIKUBE_LOCATION=17102
	I0821 04:29:02.730496    4422 notify.go:220] Checking for updates...
	I0821 04:29:02.733410    4422 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	I0821 04:29:02.737363    4422 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0821 04:29:02.740410    4422 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0821 04:29:02.744413    4422 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	I0821 04:29:02.747375    4422 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0821 04:29:02.750716    4422 config.go:182] Loaded profile config "cert-expiration-150000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0821 04:29:02.750779    4422 config.go:182] Loaded profile config "multinode-806000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0821 04:29:02.750823    4422 driver.go:373] Setting default libvirt URI to qemu:///system
	I0821 04:29:02.758402    4422 out.go:177] * Using the qemu2 driver based on user configuration
	I0821 04:29:02.765367    4422 start.go:298] selected driver: qemu2
	I0821 04:29:02.765374    4422 start.go:902] validating driver "qemu2" against <nil>
	I0821 04:29:02.765381    4422 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0821 04:29:02.767374    4422 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0821 04:29:02.770377    4422 out.go:177] * Automatically selected the socket_vmnet network
	I0821 04:29:02.774434    4422 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0821 04:29:02.774460    4422 cni.go:84] Creating CNI manager for ""
	I0821 04:29:02.774467    4422 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0821 04:29:02.774471    4422 start_flags.go:319] config:
	{Name:kubernetes-upgrade-171000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-171000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 04:29:02.778669    4422 iso.go:125] acquiring lock: {Name:mk813ea611542195bb0511881888be3fabc72ff6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 04:29:02.782490    4422 out.go:177] * Starting control plane node kubernetes-upgrade-171000 in cluster kubernetes-upgrade-171000
	I0821 04:29:02.790385    4422 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0821 04:29:02.790423    4422 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0821 04:29:02.790448    4422 cache.go:57] Caching tarball of preloaded images
	I0821 04:29:02.790536    4422 preload.go:174] Found /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0821 04:29:02.790542    4422 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0821 04:29:02.790621    4422 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/kubernetes-upgrade-171000/config.json ...
	I0821 04:29:02.790639    4422 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/kubernetes-upgrade-171000/config.json: {Name:mkd951879a5f163ab8c267a0811d0876c8a211a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 04:29:02.790868    4422 start.go:365] acquiring machines lock for kubernetes-upgrade-171000: {Name:mk9b32d9fe994be32d77812db464b2cfa7bfb400 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0821 04:29:02.790902    4422 start.go:369] acquired machines lock for "kubernetes-upgrade-171000" in 27.167µs
	I0821 04:29:02.790915    4422 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-171000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-171000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0821 04:29:02.790959    4422 start.go:125] createHost starting for "" (driver="qemu2")
	I0821 04:29:02.799368    4422 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0821 04:29:02.816007    4422 start.go:159] libmachine.API.Create for "kubernetes-upgrade-171000" (driver="qemu2")
	I0821 04:29:02.816029    4422 client.go:168] LocalClient.Create starting
	I0821 04:29:02.816089    4422 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem
	I0821 04:29:02.816114    4422 main.go:141] libmachine: Decoding PEM data...
	I0821 04:29:02.816125    4422 main.go:141] libmachine: Parsing certificate...
	I0821 04:29:02.816162    4422 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/cert.pem
	I0821 04:29:02.816181    4422 main.go:141] libmachine: Decoding PEM data...
	I0821 04:29:02.816191    4422 main.go:141] libmachine: Parsing certificate...
	I0821 04:29:02.816541    4422 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17102-920/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17102-920/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0821 04:29:02.937915    4422 main.go:141] libmachine: Creating SSH key...
	I0821 04:29:03.160772    4422 main.go:141] libmachine: Creating Disk image...
	I0821 04:29:03.160781    4422 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0821 04:29:03.161011    4422 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17102-920/.minikube/machines/kubernetes-upgrade-171000/disk.qcow2.raw /Users/jenkins/minikube-integration/17102-920/.minikube/machines/kubernetes-upgrade-171000/disk.qcow2
	I0821 04:29:03.170040    4422 main.go:141] libmachine: STDOUT: 
	I0821 04:29:03.170054    4422 main.go:141] libmachine: STDERR: 
	I0821 04:29:03.170105    4422 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/kubernetes-upgrade-171000/disk.qcow2 +20000M
	I0821 04:29:03.177339    4422 main.go:141] libmachine: STDOUT: Image resized.
	
	I0821 04:29:03.177351    4422 main.go:141] libmachine: STDERR: 
	I0821 04:29:03.177366    4422 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17102-920/.minikube/machines/kubernetes-upgrade-171000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17102-920/.minikube/machines/kubernetes-upgrade-171000/disk.qcow2
	I0821 04:29:03.177372    4422 main.go:141] libmachine: Starting QEMU VM...
	I0821 04:29:03.177407    4422 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17102-920/.minikube/machines/kubernetes-upgrade-171000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/kubernetes-upgrade-171000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17102-920/.minikube/machines/kubernetes-upgrade-171000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:9c:4b:a5:50:50 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/kubernetes-upgrade-171000/disk.qcow2
	I0821 04:29:03.178985    4422 main.go:141] libmachine: STDOUT: 
	I0821 04:29:03.179006    4422 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0821 04:29:03.179026    4422 client.go:171] LocalClient.Create took 362.996083ms
	I0821 04:29:05.181436    4422 start.go:128] duration metric: createHost completed in 2.390506208s
	I0821 04:29:05.181478    4422 start.go:83] releasing machines lock for "kubernetes-upgrade-171000", held for 2.390612875s
	W0821 04:29:05.181537    4422 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:29:05.187930    4422 out.go:177] * Deleting "kubernetes-upgrade-171000" in qemu2 ...
	W0821 04:29:05.208941    4422 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:29:05.208976    4422 start.go:687] Will try again in 5 seconds ...
	I0821 04:29:10.209893    4422 start.go:365] acquiring machines lock for kubernetes-upgrade-171000: {Name:mk9b32d9fe994be32d77812db464b2cfa7bfb400 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0821 04:29:10.210256    4422 start.go:369] acquired machines lock for "kubernetes-upgrade-171000" in 289.75µs
	I0821 04:29:10.210393    4422 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-171000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-171000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0821 04:29:10.210599    4422 start.go:125] createHost starting for "" (driver="qemu2")
	I0821 04:29:10.220436    4422 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0821 04:29:10.266316    4422 start.go:159] libmachine.API.Create for "kubernetes-upgrade-171000" (driver="qemu2")
	I0821 04:29:10.266393    4422 client.go:168] LocalClient.Create starting
	I0821 04:29:10.266567    4422 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem
	I0821 04:29:10.266649    4422 main.go:141] libmachine: Decoding PEM data...
	I0821 04:29:10.266672    4422 main.go:141] libmachine: Parsing certificate...
	I0821 04:29:10.266781    4422 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/cert.pem
	I0821 04:29:10.266830    4422 main.go:141] libmachine: Decoding PEM data...
	I0821 04:29:10.266845    4422 main.go:141] libmachine: Parsing certificate...
	I0821 04:29:10.267401    4422 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17102-920/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17102-920/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0821 04:29:10.400548    4422 main.go:141] libmachine: Creating SSH key...
	I0821 04:29:10.500126    4422 main.go:141] libmachine: Creating Disk image...
	I0821 04:29:10.500133    4422 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0821 04:29:10.500261    4422 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17102-920/.minikube/machines/kubernetes-upgrade-171000/disk.qcow2.raw /Users/jenkins/minikube-integration/17102-920/.minikube/machines/kubernetes-upgrade-171000/disk.qcow2
	I0821 04:29:10.508625    4422 main.go:141] libmachine: STDOUT: 
	I0821 04:29:10.508641    4422 main.go:141] libmachine: STDERR: 
	I0821 04:29:10.508694    4422 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/kubernetes-upgrade-171000/disk.qcow2 +20000M
	I0821 04:29:10.515851    4422 main.go:141] libmachine: STDOUT: Image resized.
	
	I0821 04:29:10.515867    4422 main.go:141] libmachine: STDERR: 
	I0821 04:29:10.515883    4422 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17102-920/.minikube/machines/kubernetes-upgrade-171000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17102-920/.minikube/machines/kubernetes-upgrade-171000/disk.qcow2
	I0821 04:29:10.515890    4422 main.go:141] libmachine: Starting QEMU VM...
	I0821 04:29:10.515930    4422 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17102-920/.minikube/machines/kubernetes-upgrade-171000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/kubernetes-upgrade-171000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17102-920/.minikube/machines/kubernetes-upgrade-171000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:8a:9f:4a:57:e8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/kubernetes-upgrade-171000/disk.qcow2
	I0821 04:29:10.517391    4422 main.go:141] libmachine: STDOUT: 
	I0821 04:29:10.517405    4422 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0821 04:29:10.517417    4422 client.go:171] LocalClient.Create took 251.019791ms
	I0821 04:29:12.519563    4422 start.go:128] duration metric: createHost completed in 2.308985833s
	I0821 04:29:12.519612    4422 start.go:83] releasing machines lock for "kubernetes-upgrade-171000", held for 2.309371792s
	W0821 04:29:12.519995    4422 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-171000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-171000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:29:12.531706    4422 out.go:177] 
	W0821 04:29:12.535714    4422 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0821 04:29:12.535736    4422 out.go:239] * 
	* 
	W0821 04:29:12.538605    4422 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0821 04:29:12.547532    4422 out.go:177] 

** /stderr **
version_upgrade_test.go:236: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-171000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-171000
version_upgrade_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-171000 status --format={{.Host}}
version_upgrade_test.go:244: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-171000 status --format={{.Host}}: exit status 7 (37.014875ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:246: status error: exit status 7 (may be ok)
version_upgrade_test.go:255: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-171000 --memory=2200 --kubernetes-version=v1.28.0-rc.1 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:255: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-171000 --memory=2200 --kubernetes-version=v1.28.0-rc.1 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.177652708s)

-- stdout --
	* [kubernetes-upgrade-171000] minikube v1.31.2 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17102
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node kubernetes-upgrade-171000 in cluster kubernetes-upgrade-171000
	* Restarting existing qemu2 VM for "kubernetes-upgrade-171000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-171000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0821 04:29:12.722216    4441 out.go:296] Setting OutFile to fd 1 ...
	I0821 04:29:12.722328    4441 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:29:12.722330    4441 out.go:309] Setting ErrFile to fd 2...
	I0821 04:29:12.722332    4441 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:29:12.722445    4441 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17102-920/.minikube/bin
	I0821 04:29:12.723383    4441 out.go:303] Setting JSON to false
	I0821 04:29:12.738418    4441 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3526,"bootTime":1692613826,"procs":411,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0821 04:29:12.738482    4441 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0821 04:29:12.743342    4441 out.go:177] * [kubernetes-upgrade-171000] minikube v1.31.2 on Darwin 13.5 (arm64)
	I0821 04:29:12.751332    4441 out.go:177]   - MINIKUBE_LOCATION=17102
	I0821 04:29:12.755395    4441 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	I0821 04:29:12.751358    4441 notify.go:220] Checking for updates...
	I0821 04:29:12.762309    4441 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0821 04:29:12.766353    4441 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0821 04:29:12.769386    4441 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	I0821 04:29:12.772327    4441 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0821 04:29:12.775623    4441 config.go:182] Loaded profile config "kubernetes-upgrade-171000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0821 04:29:12.775858    4441 driver.go:373] Setting default libvirt URI to qemu:///system
	I0821 04:29:12.780405    4441 out.go:177] * Using the qemu2 driver based on existing profile
	I0821 04:29:12.787348    4441 start.go:298] selected driver: qemu2
	I0821 04:29:12.787354    4441 start.go:902] validating driver "qemu2" against &{Name:kubernetes-upgrade-171000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-171000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 04:29:12.787413    4441 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0821 04:29:12.789463    4441 cni.go:84] Creating CNI manager for ""
	I0821 04:29:12.789477    4441 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0821 04:29:12.789492    4441 start_flags.go:319] config:
	{Name:kubernetes-upgrade-171000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0-rc.1 ClusterName:kubernetes-upgrade-171000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 04:29:12.793401    4441 iso.go:125] acquiring lock: {Name:mk813ea611542195bb0511881888be3fabc72ff6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 04:29:12.794865    4441 out.go:177] * Starting control plane node kubernetes-upgrade-171000 in cluster kubernetes-upgrade-171000
	I0821 04:29:12.802315    4441 preload.go:132] Checking if preload exists for k8s version v1.28.0-rc.1 and runtime docker
	I0821 04:29:12.802339    4441 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-rc.1-docker-overlay2-arm64.tar.lz4
	I0821 04:29:12.802347    4441 cache.go:57] Caching tarball of preloaded images
	I0821 04:29:12.802403    4441 preload.go:174] Found /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-rc.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0821 04:29:12.802408    4441 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.0-rc.1 on docker
	I0821 04:29:12.802456    4441 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/kubernetes-upgrade-171000/config.json ...
	I0821 04:29:12.802754    4441 start.go:365] acquiring machines lock for kubernetes-upgrade-171000: {Name:mk9b32d9fe994be32d77812db464b2cfa7bfb400 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0821 04:29:12.802778    4441 start.go:369] acquired machines lock for "kubernetes-upgrade-171000" in 18.333µs
	I0821 04:29:12.802787    4441 start.go:96] Skipping create...Using existing machine configuration
	I0821 04:29:12.802792    4441 fix.go:54] fixHost starting: 
	I0821 04:29:12.802905    4441 fix.go:102] recreateIfNeeded on kubernetes-upgrade-171000: state=Stopped err=<nil>
	W0821 04:29:12.802913    4441 fix.go:128] unexpected machine state, will restart: <nil>
	I0821 04:29:12.811328    4441 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-171000" ...
	I0821 04:29:12.815187    4441 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17102-920/.minikube/machines/kubernetes-upgrade-171000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/kubernetes-upgrade-171000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17102-920/.minikube/machines/kubernetes-upgrade-171000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:8a:9f:4a:57:e8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/kubernetes-upgrade-171000/disk.qcow2
	I0821 04:29:12.817010    4441 main.go:141] libmachine: STDOUT: 
	I0821 04:29:12.817023    4441 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0821 04:29:12.817051    4441 fix.go:56] fixHost completed within 14.258291ms
	I0821 04:29:12.817057    4441 start.go:83] releasing machines lock for "kubernetes-upgrade-171000", held for 14.274959ms
	W0821 04:29:12.817093    4441 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0821 04:29:12.817134    4441 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:29:12.817138    4441 start.go:687] Will try again in 5 seconds ...
	I0821 04:29:17.819187    4441 start.go:365] acquiring machines lock for kubernetes-upgrade-171000: {Name:mk9b32d9fe994be32d77812db464b2cfa7bfb400 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0821 04:29:17.819540    4441 start.go:369] acquired machines lock for "kubernetes-upgrade-171000" in 292.75µs
	I0821 04:29:17.819669    4441 start.go:96] Skipping create...Using existing machine configuration
	I0821 04:29:17.819686    4441 fix.go:54] fixHost starting: 
	I0821 04:29:17.820322    4441 fix.go:102] recreateIfNeeded on kubernetes-upgrade-171000: state=Stopped err=<nil>
	W0821 04:29:17.820365    4441 fix.go:128] unexpected machine state, will restart: <nil>
	I0821 04:29:17.824755    4441 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-171000" ...
	I0821 04:29:17.828845    4441 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17102-920/.minikube/machines/kubernetes-upgrade-171000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/kubernetes-upgrade-171000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17102-920/.minikube/machines/kubernetes-upgrade-171000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:8a:9f:4a:57:e8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/kubernetes-upgrade-171000/disk.qcow2
	I0821 04:29:17.837253    4441 main.go:141] libmachine: STDOUT: 
	I0821 04:29:17.837300    4441 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0821 04:29:17.837364    4441 fix.go:56] fixHost completed within 17.67925ms
	I0821 04:29:17.837387    4441 start.go:83] releasing machines lock for "kubernetes-upgrade-171000", held for 17.823875ms
	W0821 04:29:17.837702    4441 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-171000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-171000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:29:17.845733    4441 out.go:177] 
	W0821 04:29:17.849868    4441 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0821 04:29:17.849892    4441 out.go:239] * 
	* 
	W0821 04:29:17.852557    4441 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0821 04:29:17.860827    4441 out.go:177] 

** /stderr **
version_upgrade_test.go:257: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-171000 --memory=2200 --kubernetes-version=v1.28.0-rc.1 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:260: (dbg) Run:  kubectl --context kubernetes-upgrade-171000 version --output=json
version_upgrade_test.go:260: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-171000 version --output=json: exit status 1 (59.903709ms)

** stderr ** 
	error: context "kubernetes-upgrade-171000" does not exist

** /stderr **
version_upgrade_test.go:262: error running kubectl: exit status 1
panic.go:522: *** TestKubernetesUpgrade FAILED at 2023-08-21 04:29:17.934609 -0700 PDT m=+3362.950652793
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-171000 -n kubernetes-upgrade-171000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-171000 -n kubernetes-upgrade-171000: exit status 7 (32.362708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-171000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-171000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-171000
--- FAIL: TestKubernetesUpgrade (15.39s)
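
Note: every qemu2 start in this run fails at the same step: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so VM creation and VM restart both abort before provisioning begins. A minimal Go probe that reproduces the connection check is sketched below; the socket path is taken from the log above, and the probe is illustrative only, not part of the test suite.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the unix socket that the qemu2 driver hands to socket_vmnet_client.
		// "Connection refused" here means the socket_vmnet daemon is not running.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet reachable")
	}

If the probe fails on the CI host, restarting the socket_vmnet service (however it was installed on this agent) is the likely remedy; most of the qemu2 failures in this report share this signature.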

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.79s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.31.2 on darwin (arm64)
- MINIKUBE_LOCATION=17102
- KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2547436578/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.79s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.27s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.31.2 on darwin (arm64)
- MINIKUBE_LOCATION=17102
- KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current38374921/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.27s)
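
Note: both hyperkit subtests fail identically because the hyperkit driver exists only for darwin/amd64, so exit status 56 (DRV_UNSUPPORTED_OS) is the expected outcome on this darwin/arm64 agent. A hedged sketch of a guard that would skip, rather than fail, on unsupported platforms follows; the helper name is hypothetical, not minikube's actual API.

	package driver_test

	import (
		"runtime"
		"testing"
	)

	// skipIfHyperkitUnsupported is a hypothetical helper: hyperkit ships
	// only for darwin/amd64, so skip everywhere else instead of failing.
	func skipIfHyperkitUnsupported(t *testing.T) {
		t.Helper()
		if runtime.GOOS != "darwin" || runtime.GOARCH != "amd64" {
			t.Skipf("hyperkit driver is not supported on %s/%s", runtime.GOOS, runtime.GOARCH)
		}
	}

Wiring such a guard into the affected tests would turn these two failures into skips on arm64 agents.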

TestStoppedBinaryUpgrade/Setup (167.53s)

=== RUN   TestStoppedBinaryUpgrade/Setup
version_upgrade_test.go:167: v1.6.2 release installation failed: bad response code: 404
--- FAIL: TestStoppedBinaryUpgrade/Setup (167.53s)
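
Note: the 404 most likely means the v1.6.2 release asset the test tries to fetch does not exist for this platform, since v1.6.2 predates minikube's darwin/arm64 builds. A small pre-flight check is sketched below; the URL shape is an assumption for illustration, not necessarily the test's actual download path.

	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		// Assumed release-asset URL shape; adjust to the real download source.
		url := "https://storage.googleapis.com/minikube/releases/v1.6.2/minikube-darwin-arm64"
		resp, err := http.Head(url)
		if err != nil {
			fmt.Println("HEAD failed:", err)
			return
		}
		resp.Body.Close()
		fmt.Println("status:", resp.StatusCode) // a 404 matches the Setup failure above
	}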

TestPause/serial/Start (9.82s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-473000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-473000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.752493042s)

-- stdout --
	* [pause-473000] minikube v1.31.2 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17102
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node pause-473000 in cluster pause-473000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-473000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-473000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-473000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-473000 -n pause-473000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-473000 -n pause-473000: exit status 7 (68.313708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-473000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.82s)

TestNoKubernetes/serial/StartWithK8s (10.12s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-809000 --driver=qemu2 
E0821 04:29:39.157409    1362 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/ingress-addon-legacy-717000/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-809000 --driver=qemu2 : exit status 80 (10.045723208s)

-- stdout --
	* [NoKubernetes-809000] minikube v1.31.2 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17102
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node NoKubernetes-809000 in cluster NoKubernetes-809000
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-809000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-809000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-809000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-809000 -n NoKubernetes-809000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-809000 -n NoKubernetes-809000: exit status 7 (70.138917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-809000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (10.12s)

TestNoKubernetes/serial/StartWithStopK8s (5.31s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-809000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-809000 --no-kubernetes --driver=qemu2 : exit status 80 (5.241752417s)

-- stdout --
	* [NoKubernetes-809000] minikube v1.31.2 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17102
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-809000
	* Restarting existing qemu2 VM for "NoKubernetes-809000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-809000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-809000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-809000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-809000 -n NoKubernetes-809000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-809000 -n NoKubernetes-809000: exit status 7 (68.54475ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-809000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.31s)

TestNoKubernetes/serial/Start (5.32s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-809000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-809000 --no-kubernetes --driver=qemu2 : exit status 80 (5.247861125s)

-- stdout --
	* [NoKubernetes-809000] minikube v1.31.2 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17102
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-809000
	* Restarting existing qemu2 VM for "NoKubernetes-809000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-809000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-809000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-809000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-809000 -n NoKubernetes-809000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-809000 -n NoKubernetes-809000: exit status 7 (68.722291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-809000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.32s)
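
Unlike the fresh-profile runs later in this report, these retries take the "Restarting existing qemu2 VM" path: the profile and disk image already exist, so there is no qemu-img create/delete cycle, which is presumably why these runs fail in ~5.3s instead of ~9.8s. The stderr advice above is the standard escape hatch when a stale profile may be involved:

	out/minikube-darwin-arm64 delete -p NoKubernetes-809000
	out/minikube-darwin-arm64 start -p NoKubernetes-809000 --no-kubernetes --driver=qemu2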

TestNoKubernetes/serial/StartNoArgs (5.31s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-809000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-809000 --driver=qemu2 : exit status 80 (5.237490958s)

-- stdout --
	* [NoKubernetes-809000] minikube v1.31.2 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17102
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-809000
	* Restarting existing qemu2 VM for "NoKubernetes-809000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-809000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-809000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-809000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-809000 -n NoKubernetes-809000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-809000 -n NoKubernetes-809000: exit status 7 (67.754333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-809000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.31s)
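
Note that StartNoArgs omits --no-kubernetes, yet the run still prints "Starting minikube without Kubernetes": it reuses the existing profile ("Using the qemu2 driver based on existing profile"), and the no-Kubernetes setting is carried in the profile's saved config. To see what a reused profile will re-apply, one can read its config.json directly; a sketch, with the path taken from the MINIKUBE_HOME value logged above:

	cat /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/NoKubernetes-809000/config.json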

TestNetworkPlugins/group/auto/Start (9.74s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-797000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
E0821 04:30:06.865881    1362 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/ingress-addon-legacy-717000/client.crt: no such file or directory
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-797000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.735786125s)

-- stdout --
	* [auto-797000] minikube v1.31.2 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17102
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node auto-797000 in cluster auto-797000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-797000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0821 04:30:00.761368    4568 out.go:296] Setting OutFile to fd 1 ...
	I0821 04:30:00.761487    4568 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:30:00.761490    4568 out.go:309] Setting ErrFile to fd 2...
	I0821 04:30:00.761492    4568 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:30:00.761601    4568 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17102-920/.minikube/bin
	I0821 04:30:00.762647    4568 out.go:303] Setting JSON to false
	I0821 04:30:00.778263    4568 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3574,"bootTime":1692613826,"procs":420,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0821 04:30:00.778337    4568 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0821 04:30:00.784157    4568 out.go:177] * [auto-797000] minikube v1.31.2 on Darwin 13.5 (arm64)
	I0821 04:30:00.796178    4568 out.go:177]   - MINIKUBE_LOCATION=17102
	I0821 04:30:00.792166    4568 notify.go:220] Checking for updates...
	I0821 04:30:00.800150    4568 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	I0821 04:30:00.804161    4568 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0821 04:30:00.807078    4568 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0821 04:30:00.810196    4568 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	I0821 04:30:00.813158    4568 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0821 04:30:00.814637    4568 config.go:182] Loaded profile config "multinode-806000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0821 04:30:00.814681    4568 driver.go:373] Setting default libvirt URI to qemu:///system
	I0821 04:30:00.818186    4568 out.go:177] * Using the qemu2 driver based on user configuration
	I0821 04:30:00.828163    4568 start.go:298] selected driver: qemu2
	I0821 04:30:00.828170    4568 start.go:902] validating driver "qemu2" against <nil>
	I0821 04:30:00.828176    4568 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0821 04:30:00.830173    4568 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0821 04:30:00.833108    4568 out.go:177] * Automatically selected the socket_vmnet network
	I0821 04:30:00.836270    4568 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0821 04:30:00.836289    4568 cni.go:84] Creating CNI manager for ""
	I0821 04:30:00.836295    4568 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0821 04:30:00.836298    4568 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0821 04:30:00.836305    4568 start_flags.go:319] config:
	{Name:auto-797000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:auto-797000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 04:30:00.840327    4568 iso.go:125] acquiring lock: {Name:mk813ea611542195bb0511881888be3fabc72ff6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 04:30:00.843106    4568 out.go:177] * Starting control plane node auto-797000 in cluster auto-797000
	I0821 04:30:00.851152    4568 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime docker
	I0821 04:30:00.851199    4568 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4
	I0821 04:30:00.851208    4568 cache.go:57] Caching tarball of preloaded images
	I0821 04:30:00.851262    4568 preload.go:174] Found /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0821 04:30:00.851267    4568 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on docker
	I0821 04:30:00.851329    4568 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/auto-797000/config.json ...
	I0821 04:30:00.851342    4568 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/auto-797000/config.json: {Name:mkdf848be1aa77fcccf2ed6bdfc66056cdf3e040 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 04:30:00.851537    4568 start.go:365] acquiring machines lock for auto-797000: {Name:mk9b32d9fe994be32d77812db464b2cfa7bfb400 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0821 04:30:00.851569    4568 start.go:369] acquired machines lock for "auto-797000" in 26.375µs
	I0821 04:30:00.851580    4568 start.go:93] Provisioning new machine with config: &{Name:auto-797000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:auto-797000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0821 04:30:00.851608    4568 start.go:125] createHost starting for "" (driver="qemu2")
	I0821 04:30:00.860168    4568 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0821 04:30:00.876238    4568 start.go:159] libmachine.API.Create for "auto-797000" (driver="qemu2")
	I0821 04:30:00.876264    4568 client.go:168] LocalClient.Create starting
	I0821 04:30:00.876319    4568 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem
	I0821 04:30:00.876345    4568 main.go:141] libmachine: Decoding PEM data...
	I0821 04:30:00.876357    4568 main.go:141] libmachine: Parsing certificate...
	I0821 04:30:00.876402    4568 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/cert.pem
	I0821 04:30:00.876421    4568 main.go:141] libmachine: Decoding PEM data...
	I0821 04:30:00.876432    4568 main.go:141] libmachine: Parsing certificate...
	I0821 04:30:00.876741    4568 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17102-920/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17102-920/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0821 04:30:00.996732    4568 main.go:141] libmachine: Creating SSH key...
	I0821 04:30:01.140399    4568 main.go:141] libmachine: Creating Disk image...
	I0821 04:30:01.140406    4568 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0821 04:30:01.140548    4568 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17102-920/.minikube/machines/auto-797000/disk.qcow2.raw /Users/jenkins/minikube-integration/17102-920/.minikube/machines/auto-797000/disk.qcow2
	I0821 04:30:01.149056    4568 main.go:141] libmachine: STDOUT: 
	I0821 04:30:01.149068    4568 main.go:141] libmachine: STDERR: 
	I0821 04:30:01.149127    4568 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/auto-797000/disk.qcow2 +20000M
	I0821 04:30:01.156308    4568 main.go:141] libmachine: STDOUT: Image resized.
	
	I0821 04:30:01.156324    4568 main.go:141] libmachine: STDERR: 
	I0821 04:30:01.156342    4568 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17102-920/.minikube/machines/auto-797000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17102-920/.minikube/machines/auto-797000/disk.qcow2
	I0821 04:30:01.156348    4568 main.go:141] libmachine: Starting QEMU VM...
	I0821 04:30:01.156375    4568 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17102-920/.minikube/machines/auto-797000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/auto-797000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17102-920/.minikube/machines/auto-797000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:33:53:ac:72:f6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/auto-797000/disk.qcow2
	I0821 04:30:01.157939    4568 main.go:141] libmachine: STDOUT: 
	I0821 04:30:01.157950    4568 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0821 04:30:01.157970    4568 client.go:171] LocalClient.Create took 281.703417ms
	I0821 04:30:03.160122    4568 start.go:128] duration metric: createHost completed in 2.308535583s
	I0821 04:30:03.160192    4568 start.go:83] releasing machines lock for "auto-797000", held for 2.308658417s
	W0821 04:30:03.160261    4568 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:30:03.169703    4568 out.go:177] * Deleting "auto-797000" in qemu2 ...
	W0821 04:30:03.190737    4568 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:30:03.190790    4568 start.go:687] Will try again in 5 seconds ...
	I0821 04:30:08.192972    4568 start.go:365] acquiring machines lock for auto-797000: {Name:mk9b32d9fe994be32d77812db464b2cfa7bfb400 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0821 04:30:08.193370    4568 start.go:369] acquired machines lock for "auto-797000" in 281.291µs
	I0821 04:30:08.193526    4568 start.go:93] Provisioning new machine with config: &{Name:auto-797000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:auto-797000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0821 04:30:08.193854    4568 start.go:125] createHost starting for "" (driver="qemu2")
	I0821 04:30:08.202483    4568 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0821 04:30:08.243464    4568 start.go:159] libmachine.API.Create for "auto-797000" (driver="qemu2")
	I0821 04:30:08.243515    4568 client.go:168] LocalClient.Create starting
	I0821 04:30:08.243630    4568 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem
	I0821 04:30:08.243684    4568 main.go:141] libmachine: Decoding PEM data...
	I0821 04:30:08.243700    4568 main.go:141] libmachine: Parsing certificate...
	I0821 04:30:08.243768    4568 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/cert.pem
	I0821 04:30:08.243808    4568 main.go:141] libmachine: Decoding PEM data...
	I0821 04:30:08.243824    4568 main.go:141] libmachine: Parsing certificate...
	I0821 04:30:08.244357    4568 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17102-920/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17102-920/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0821 04:30:08.376647    4568 main.go:141] libmachine: Creating SSH key...
	I0821 04:30:08.409944    4568 main.go:141] libmachine: Creating Disk image...
	I0821 04:30:08.409950    4568 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0821 04:30:08.410132    4568 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17102-920/.minikube/machines/auto-797000/disk.qcow2.raw /Users/jenkins/minikube-integration/17102-920/.minikube/machines/auto-797000/disk.qcow2
	I0821 04:30:08.418740    4568 main.go:141] libmachine: STDOUT: 
	I0821 04:30:08.418763    4568 main.go:141] libmachine: STDERR: 
	I0821 04:30:08.418835    4568 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/auto-797000/disk.qcow2 +20000M
	I0821 04:30:08.425975    4568 main.go:141] libmachine: STDOUT: Image resized.
	
	I0821 04:30:08.425987    4568 main.go:141] libmachine: STDERR: 
	I0821 04:30:08.426001    4568 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17102-920/.minikube/machines/auto-797000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17102-920/.minikube/machines/auto-797000/disk.qcow2
	I0821 04:30:08.426006    4568 main.go:141] libmachine: Starting QEMU VM...
	I0821 04:30:08.426036    4568 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17102-920/.minikube/machines/auto-797000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/auto-797000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17102-920/.minikube/machines/auto-797000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:bf:74:12:e6:f6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/auto-797000/disk.qcow2
	I0821 04:30:08.427533    4568 main.go:141] libmachine: STDOUT: 
	I0821 04:30:08.427553    4568 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0821 04:30:08.427564    4568 client.go:171] LocalClient.Create took 184.045791ms
	I0821 04:30:10.429688    4568 start.go:128] duration metric: createHost completed in 2.23584825s
	I0821 04:30:10.429793    4568 start.go:83] releasing machines lock for "auto-797000", held for 2.236393334s
	W0821 04:30:10.430268    4568 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-797000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-797000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:30:10.439767    4568 out.go:177] 
	W0821 04:30:10.443952    4568 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0821 04:30:10.443974    4568 out.go:239] * 
	* 
	W0821 04:30:10.446447    4568 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0821 04:30:10.455929    4568 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.74s)
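
The --alsologtostderr trace pins down exactly where the driver dies: qemu-img convert and qemu-img resize both succeed, and the very next step, launching qemu-system-aarch64 through socket_vmnet_client, returns "Connection refused"; minikube then deletes the half-created machine, waits 5 seconds, and fails identically on the second attempt before exiting with GUEST_PROVISION. The failing step can be probed in isolation, since socket_vmnet_client connects to the socket and execs the given command with the connection passed as fd 3 (hence fd=3 in the -netdev argument above); a sketch using a no-op in place of the QEMU invocation:

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
	echo $?   # non-zero, with "Connection refused", while the daemon is down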

TestNetworkPlugins/group/kindnet/Start (9.85s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-797000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-797000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.853101583s)

-- stdout --
	* [kindnet-797000] minikube v1.31.2 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17102
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kindnet-797000 in cluster kindnet-797000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-797000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0821 04:30:12.530445    4697 out.go:296] Setting OutFile to fd 1 ...
	I0821 04:30:12.530575    4697 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:30:12.530577    4697 out.go:309] Setting ErrFile to fd 2...
	I0821 04:30:12.530580    4697 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:30:12.530684    4697 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17102-920/.minikube/bin
	I0821 04:30:12.531716    4697 out.go:303] Setting JSON to false
	I0821 04:30:12.547098    4697 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3586,"bootTime":1692613826,"procs":419,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0821 04:30:12.547173    4697 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0821 04:30:12.551374    4697 out.go:177] * [kindnet-797000] minikube v1.31.2 on Darwin 13.5 (arm64)
	I0821 04:30:12.558265    4697 out.go:177]   - MINIKUBE_LOCATION=17102
	I0821 04:30:12.562269    4697 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	I0821 04:30:12.558305    4697 notify.go:220] Checking for updates...
	I0821 04:30:12.569251    4697 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0821 04:30:12.573274    4697 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0821 04:30:12.576279    4697 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	I0821 04:30:12.579252    4697 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0821 04:30:12.582623    4697 config.go:182] Loaded profile config "multinode-806000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0821 04:30:12.582667    4697 driver.go:373] Setting default libvirt URI to qemu:///system
	I0821 04:30:12.586174    4697 out.go:177] * Using the qemu2 driver based on user configuration
	I0821 04:30:12.593281    4697 start.go:298] selected driver: qemu2
	I0821 04:30:12.593289    4697 start.go:902] validating driver "qemu2" against <nil>
	I0821 04:30:12.593296    4697 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0821 04:30:12.595261    4697 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0821 04:30:12.599126    4697 out.go:177] * Automatically selected the socket_vmnet network
	I0821 04:30:12.602314    4697 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0821 04:30:12.602330    4697 cni.go:84] Creating CNI manager for "kindnet"
	I0821 04:30:12.602334    4697 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0821 04:30:12.602342    4697 start_flags.go:319] config:
	{Name:kindnet-797000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:kindnet-797000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 04:30:12.606615    4697 iso.go:125] acquiring lock: {Name:mk813ea611542195bb0511881888be3fabc72ff6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 04:30:12.614325    4697 out.go:177] * Starting control plane node kindnet-797000 in cluster kindnet-797000
	I0821 04:30:12.618249    4697 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime docker
	I0821 04:30:12.618276    4697 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4
	I0821 04:30:12.618301    4697 cache.go:57] Caching tarball of preloaded images
	I0821 04:30:12.618371    4697 preload.go:174] Found /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0821 04:30:12.618377    4697 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on docker
	I0821 04:30:12.618450    4697 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/kindnet-797000/config.json ...
	I0821 04:30:12.618462    4697 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/kindnet-797000/config.json: {Name:mka5dbc2643757eb1b443a00973e58f5d05519dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 04:30:12.618670    4697 start.go:365] acquiring machines lock for kindnet-797000: {Name:mk9b32d9fe994be32d77812db464b2cfa7bfb400 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0821 04:30:12.618699    4697 start.go:369] acquired machines lock for "kindnet-797000" in 23.709µs
	I0821 04:30:12.618710    4697 start.go:93] Provisioning new machine with config: &{Name:kindnet-797000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:kindnet-797000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0821 04:30:12.618751    4697 start.go:125] createHost starting for "" (driver="qemu2")
	I0821 04:30:12.627227    4697 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0821 04:30:12.643448    4697 start.go:159] libmachine.API.Create for "kindnet-797000" (driver="qemu2")
	I0821 04:30:12.643473    4697 client.go:168] LocalClient.Create starting
	I0821 04:30:12.643534    4697 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem
	I0821 04:30:12.643560    4697 main.go:141] libmachine: Decoding PEM data...
	I0821 04:30:12.643569    4697 main.go:141] libmachine: Parsing certificate...
	I0821 04:30:12.643610    4697 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/cert.pem
	I0821 04:30:12.643628    4697 main.go:141] libmachine: Decoding PEM data...
	I0821 04:30:12.643638    4697 main.go:141] libmachine: Parsing certificate...
	I0821 04:30:12.643931    4697 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17102-920/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17102-920/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0821 04:30:12.763398    4697 main.go:141] libmachine: Creating SSH key...
	I0821 04:30:12.884713    4697 main.go:141] libmachine: Creating Disk image...
	I0821 04:30:12.884719    4697 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0821 04:30:12.884854    4697 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17102-920/.minikube/machines/kindnet-797000/disk.qcow2.raw /Users/jenkins/minikube-integration/17102-920/.minikube/machines/kindnet-797000/disk.qcow2
	I0821 04:30:12.893533    4697 main.go:141] libmachine: STDOUT: 
	I0821 04:30:12.893550    4697 main.go:141] libmachine: STDERR: 
	I0821 04:30:12.893609    4697 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/kindnet-797000/disk.qcow2 +20000M
	I0821 04:30:12.900854    4697 main.go:141] libmachine: STDOUT: Image resized.
	
	I0821 04:30:12.900866    4697 main.go:141] libmachine: STDERR: 
	I0821 04:30:12.900885    4697 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17102-920/.minikube/machines/kindnet-797000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17102-920/.minikube/machines/kindnet-797000/disk.qcow2
	I0821 04:30:12.900901    4697 main.go:141] libmachine: Starting QEMU VM...
	I0821 04:30:12.900936    4697 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17102-920/.minikube/machines/kindnet-797000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/kindnet-797000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17102-920/.minikube/machines/kindnet-797000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:d5:2a:a5:77:7e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/kindnet-797000/disk.qcow2
	I0821 04:30:12.902500    4697 main.go:141] libmachine: STDOUT: 
	I0821 04:30:12.902513    4697 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0821 04:30:12.902531    4697 client.go:171] LocalClient.Create took 259.055208ms
	I0821 04:30:14.904724    4697 start.go:128] duration metric: createHost completed in 2.285982708s
	I0821 04:30:14.904811    4697 start.go:83] releasing machines lock for "kindnet-797000", held for 2.286146541s
	W0821 04:30:14.904915    4697 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:30:14.916043    4697 out.go:177] * Deleting "kindnet-797000" in qemu2 ...
	W0821 04:30:14.938384    4697 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:30:14.938418    4697 start.go:687] Will try again in 5 seconds ...
	I0821 04:30:19.940548    4697 start.go:365] acquiring machines lock for kindnet-797000: {Name:mk9b32d9fe994be32d77812db464b2cfa7bfb400 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0821 04:30:19.941126    4697 start.go:369] acquired machines lock for "kindnet-797000" in 446.75µs
	I0821 04:30:19.941243    4697 start.go:93] Provisioning new machine with config: &{Name:kindnet-797000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:kindnet-797000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0821 04:30:19.941560    4697 start.go:125] createHost starting for "" (driver="qemu2")
	I0821 04:30:19.950309    4697 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0821 04:30:19.994629    4697 start.go:159] libmachine.API.Create for "kindnet-797000" (driver="qemu2")
	I0821 04:30:19.994662    4697 client.go:168] LocalClient.Create starting
	I0821 04:30:19.994798    4697 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem
	I0821 04:30:19.994862    4697 main.go:141] libmachine: Decoding PEM data...
	I0821 04:30:19.994878    4697 main.go:141] libmachine: Parsing certificate...
	I0821 04:30:19.994953    4697 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/cert.pem
	I0821 04:30:19.994990    4697 main.go:141] libmachine: Decoding PEM data...
	I0821 04:30:19.995004    4697 main.go:141] libmachine: Parsing certificate...
	I0821 04:30:19.995504    4697 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17102-920/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17102-920/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0821 04:30:20.127240    4697 main.go:141] libmachine: Creating SSH key...
	I0821 04:30:20.292857    4697 main.go:141] libmachine: Creating Disk image...
	I0821 04:30:20.292865    4697 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0821 04:30:20.293014    4697 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17102-920/.minikube/machines/kindnet-797000/disk.qcow2.raw /Users/jenkins/minikube-integration/17102-920/.minikube/machines/kindnet-797000/disk.qcow2
	I0821 04:30:20.301882    4697 main.go:141] libmachine: STDOUT: 
	I0821 04:30:20.301896    4697 main.go:141] libmachine: STDERR: 
	I0821 04:30:20.301946    4697 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/kindnet-797000/disk.qcow2 +20000M
	I0821 04:30:20.309185    4697 main.go:141] libmachine: STDOUT: Image resized.
	
	I0821 04:30:20.309197    4697 main.go:141] libmachine: STDERR: 
	I0821 04:30:20.309208    4697 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17102-920/.minikube/machines/kindnet-797000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17102-920/.minikube/machines/kindnet-797000/disk.qcow2
	I0821 04:30:20.309216    4697 main.go:141] libmachine: Starting QEMU VM...
	I0821 04:30:20.309248    4697 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17102-920/.minikube/machines/kindnet-797000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/kindnet-797000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17102-920/.minikube/machines/kindnet-797000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:0a:76:8c:80:a7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/kindnet-797000/disk.qcow2
	I0821 04:30:20.311968    4697 main.go:141] libmachine: STDOUT: 
	I0821 04:30:20.311986    4697 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0821 04:30:20.312000    4697 client.go:171] LocalClient.Create took 317.339458ms
	I0821 04:30:22.314153    4697 start.go:128] duration metric: createHost completed in 2.372610625s
	I0821 04:30:22.314205    4697 start.go:83] releasing machines lock for "kindnet-797000", held for 2.373102292s
	W0821 04:30:22.314555    4697 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-797000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-797000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:30:22.324161    4697 out.go:177] 
	W0821 04:30:22.331197    4697 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0821 04:30:22.331221    4697 out.go:239] * 
	* 
	W0821 04:30:22.334034    4697 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0821 04:30:22.343154    4697 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.85s)
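
The TestNetworkPlugins variants all run the same start command and differ only in the --cni flag; compare the "Creating CNI manager" lines (the auto profile passes an empty CNI and minikube recommends bridge, while kindnet requests the named plugin). Since every variant aborts at the socket_vmnet step, the CNI choice is never actually exercised. Once the daemon is healthy, the matrix can be replayed by hand; a sketch mirroring the test invocations, with the wait flags omitted:

	for cni in "" kindnet calico; do
	  out/minikube-darwin-arm64 start -p "${cni:-auto}-797000" --memory=3072 \
	    ${cni:+--cni=$cni} --driver=qemu2
	done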

TestNetworkPlugins/group/calico/Start (9.72s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-797000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
E0821 04:30:32.517037    1362 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/client.crt: no such file or directory
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-797000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.722089s)

-- stdout --
	* [calico-797000] minikube v1.31.2 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17102
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node calico-797000 in cluster calico-797000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-797000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0821 04:30:24.554022    4811 out.go:296] Setting OutFile to fd 1 ...
	I0821 04:30:24.554142    4811 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:30:24.554145    4811 out.go:309] Setting ErrFile to fd 2...
	I0821 04:30:24.554148    4811 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:30:24.554252    4811 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17102-920/.minikube/bin
	I0821 04:30:24.555298    4811 out.go:303] Setting JSON to false
	I0821 04:30:24.570596    4811 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3598,"bootTime":1692613826,"procs":416,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0821 04:30:24.570677    4811 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0821 04:30:24.575963    4811 out.go:177] * [calico-797000] minikube v1.31.2 on Darwin 13.5 (arm64)
	I0821 04:30:24.583937    4811 out.go:177]   - MINIKUBE_LOCATION=17102
	I0821 04:30:24.588023    4811 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	I0821 04:30:24.584004    4811 notify.go:220] Checking for updates...
	I0821 04:30:24.594945    4811 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0821 04:30:24.598007    4811 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0821 04:30:24.601014    4811 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	I0821 04:30:24.604003    4811 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0821 04:30:24.607696    4811 config.go:182] Loaded profile config "multinode-806000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0821 04:30:24.607750    4811 driver.go:373] Setting default libvirt URI to qemu:///system
	I0821 04:30:24.611977    4811 out.go:177] * Using the qemu2 driver based on user configuration
	I0821 04:30:24.618951    4811 start.go:298] selected driver: qemu2
	I0821 04:30:24.618958    4811 start.go:902] validating driver "qemu2" against <nil>
	I0821 04:30:24.618969    4811 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0821 04:30:24.621074    4811 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0821 04:30:24.623988    4811 out.go:177] * Automatically selected the socket_vmnet network
	I0821 04:30:24.627955    4811 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0821 04:30:24.627974    4811 cni.go:84] Creating CNI manager for "calico"
	I0821 04:30:24.627978    4811 start_flags.go:314] Found "Calico" CNI - setting NetworkPlugin=cni
	I0821 04:30:24.627988    4811 start_flags.go:319] config:
	{Name:calico-797000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:calico-797000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:c
ni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 04:30:24.632184    4811 iso.go:125] acquiring lock: {Name:mk813ea611542195bb0511881888be3fabc72ff6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 04:30:24.639924    4811 out.go:177] * Starting control plane node calico-797000 in cluster calico-797000
	I0821 04:30:24.643988    4811 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime docker
	I0821 04:30:24.644009    4811 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4
	I0821 04:30:24.644024    4811 cache.go:57] Caching tarball of preloaded images
	I0821 04:30:24.644088    4811 preload.go:174] Found /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0821 04:30:24.644094    4811 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on docker
	I0821 04:30:24.644180    4811 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/calico-797000/config.json ...
	I0821 04:30:24.644201    4811 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/calico-797000/config.json: {Name:mkdd9ffa0c6549f1a71cb9443059795b9e8e2b95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 04:30:24.644414    4811 start.go:365] acquiring machines lock for calico-797000: {Name:mk9b32d9fe994be32d77812db464b2cfa7bfb400 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0821 04:30:24.644446    4811 start.go:369] acquired machines lock for "calico-797000" in 25.583µs
	I0821 04:30:24.644457    4811 start.go:93] Provisioning new machine with config: &{Name:calico-797000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:
calico-797000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0821 04:30:24.644494    4811 start.go:125] createHost starting for "" (driver="qemu2")
	I0821 04:30:24.652995    4811 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0821 04:30:24.669283    4811 start.go:159] libmachine.API.Create for "calico-797000" (driver="qemu2")
	I0821 04:30:24.669304    4811 client.go:168] LocalClient.Create starting
	I0821 04:30:24.669365    4811 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem
	I0821 04:30:24.669391    4811 main.go:141] libmachine: Decoding PEM data...
	I0821 04:30:24.669404    4811 main.go:141] libmachine: Parsing certificate...
	I0821 04:30:24.669447    4811 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/cert.pem
	I0821 04:30:24.669466    4811 main.go:141] libmachine: Decoding PEM data...
	I0821 04:30:24.669482    4811 main.go:141] libmachine: Parsing certificate...
	I0821 04:30:24.669827    4811 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17102-920/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17102-920/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0821 04:30:24.788690    4811 main.go:141] libmachine: Creating SSH key...
	I0821 04:30:24.904638    4811 main.go:141] libmachine: Creating Disk image...
	I0821 04:30:24.904644    4811 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0821 04:30:24.904792    4811 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17102-920/.minikube/machines/calico-797000/disk.qcow2.raw /Users/jenkins/minikube-integration/17102-920/.minikube/machines/calico-797000/disk.qcow2
	I0821 04:30:24.913320    4811 main.go:141] libmachine: STDOUT: 
	I0821 04:30:24.913337    4811 main.go:141] libmachine: STDERR: 
	I0821 04:30:24.913398    4811 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/calico-797000/disk.qcow2 +20000M
	I0821 04:30:24.920647    4811 main.go:141] libmachine: STDOUT: Image resized.
	
	I0821 04:30:24.920669    4811 main.go:141] libmachine: STDERR: 
	I0821 04:30:24.920684    4811 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17102-920/.minikube/machines/calico-797000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17102-920/.minikube/machines/calico-797000/disk.qcow2
	I0821 04:30:24.920690    4811 main.go:141] libmachine: Starting QEMU VM...
	I0821 04:30:24.920723    4811 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17102-920/.minikube/machines/calico-797000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/calico-797000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17102-920/.minikube/machines/calico-797000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:16:83:bb:21:ea -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/calico-797000/disk.qcow2
	I0821 04:30:24.922298    4811 main.go:141] libmachine: STDOUT: 
	I0821 04:30:24.922309    4811 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0821 04:30:24.922334    4811 client.go:171] LocalClient.Create took 253.025458ms
	I0821 04:30:26.924513    4811 start.go:128] duration metric: createHost completed in 2.280042584s
	I0821 04:30:26.924567    4811 start.go:83] releasing machines lock for "calico-797000", held for 2.280156417s
	W0821 04:30:26.924615    4811 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:30:26.932962    4811 out.go:177] * Deleting "calico-797000" in qemu2 ...
	W0821 04:30:26.953577    4811 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:30:26.953607    4811 start.go:687] Will try again in 5 seconds ...
	I0821 04:30:31.955819    4811 start.go:365] acquiring machines lock for calico-797000: {Name:mk9b32d9fe994be32d77812db464b2cfa7bfb400 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0821 04:30:31.956180    4811 start.go:369] acquired machines lock for "calico-797000" in 274.333µs
	I0821 04:30:31.956291    4811 start.go:93] Provisioning new machine with config: &{Name:calico-797000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:
calico-797000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0821 04:30:31.956664    4811 start.go:125] createHost starting for "" (driver="qemu2")
	I0821 04:30:31.967341    4811 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0821 04:30:32.013513    4811 start.go:159] libmachine.API.Create for "calico-797000" (driver="qemu2")
	I0821 04:30:32.013579    4811 client.go:168] LocalClient.Create starting
	I0821 04:30:32.013704    4811 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem
	I0821 04:30:32.013775    4811 main.go:141] libmachine: Decoding PEM data...
	I0821 04:30:32.013800    4811 main.go:141] libmachine: Parsing certificate...
	I0821 04:30:32.013872    4811 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/cert.pem
	I0821 04:30:32.013914    4811 main.go:141] libmachine: Decoding PEM data...
	I0821 04:30:32.013930    4811 main.go:141] libmachine: Parsing certificate...
	I0821 04:30:32.014423    4811 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17102-920/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17102-920/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0821 04:30:32.144358    4811 main.go:141] libmachine: Creating SSH key...
	I0821 04:30:32.191533    4811 main.go:141] libmachine: Creating Disk image...
	I0821 04:30:32.191538    4811 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0821 04:30:32.191701    4811 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17102-920/.minikube/machines/calico-797000/disk.qcow2.raw /Users/jenkins/minikube-integration/17102-920/.minikube/machines/calico-797000/disk.qcow2
	I0821 04:30:32.200254    4811 main.go:141] libmachine: STDOUT: 
	I0821 04:30:32.200268    4811 main.go:141] libmachine: STDERR: 
	I0821 04:30:32.200324    4811 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/calico-797000/disk.qcow2 +20000M
	I0821 04:30:32.207561    4811 main.go:141] libmachine: STDOUT: Image resized.
	
	I0821 04:30:32.207577    4811 main.go:141] libmachine: STDERR: 
	I0821 04:30:32.207588    4811 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17102-920/.minikube/machines/calico-797000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17102-920/.minikube/machines/calico-797000/disk.qcow2
	I0821 04:30:32.207593    4811 main.go:141] libmachine: Starting QEMU VM...
	I0821 04:30:32.207623    4811 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17102-920/.minikube/machines/calico-797000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/calico-797000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17102-920/.minikube/machines/calico-797000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:ae:28:ae:a0:14 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/calico-797000/disk.qcow2
	I0821 04:30:32.209177    4811 main.go:141] libmachine: STDOUT: 
	I0821 04:30:32.209193    4811 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0821 04:30:32.209203    4811 client.go:171] LocalClient.Create took 195.618958ms
	I0821 04:30:34.211385    4811 start.go:128] duration metric: createHost completed in 2.254690958s
	I0821 04:30:34.211459    4811 start.go:83] releasing machines lock for "calico-797000", held for 2.255298167s
	W0821 04:30:34.211799    4811 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-797000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-797000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:30:34.219390    4811 out.go:177] 
	W0821 04:30:34.223513    4811 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0821 04:30:34.223560    4811 out.go:239] * 
	* 
	W0821 04:30:34.225908    4811 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0821 04:30:34.236475    4811 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.72s)
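For context on the mechanism that is failing: the logged command lines suggest that socket_vmnet_client connects to /var/run/socket_vmnet and hands the connected socket to qemu as file descriptor 3, which the "-netdev socket,id=net0,fd=3" flag then consumes as the VM's network backend. Because the initial connect is refused, qemu is never exec'd at all. Below is an illustrative Go sketch of that fd-inheritance pattern; socket_vmnet_client itself is a separate C program, so this is an assumption-laden analogue, not its source.

	// fd_passing.go: illustrative sketch of the fd-inheritance pattern
	// implied by "-netdev socket,id=net0,fd=3" in the logged qemu command
	// lines. Not the source of socket_vmnet_client.
	package main

	import (
		"log"
		"net"
		"os"
		"os/exec"
	)

	func main() {
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			// This is the step failing throughout this report.
			log.Fatalf("connect failed: %v", err)
		}
		defer conn.Close()

		f, err := conn.(*net.UnixConn).File() // duplicate the socket as an *os.File
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		// Placeholder child command; the real invocation is the long
		// qemu-system-aarch64 line captured in the logs above.
		cmd := exec.Command("/bin/echo", "child would be qemu with -netdev socket,fd=3")
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		// ExtraFiles[0] becomes file descriptor 3 in the child process,
		// which is the fd=3 that qemu's socket netdev expects.
		cmd.ExtraFiles = []*os.File{f}
		if err := cmd.Run(); err != nil {
			log.Fatal(err)
		}
	}

In Go, exec.Cmd.ExtraFiles[i] is inherited as descriptor 3+i in the child, which is why the first extra file lines up with qemu's fd=3.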

TestNetworkPlugins/group/custom-flannel/Start (9.86s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-797000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-797000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.8473405s)

-- stdout --
	* [custom-flannel-797000] minikube v1.31.2 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17102
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node custom-flannel-797000 in cluster custom-flannel-797000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-797000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0821 04:30:36.567052    4930 out.go:296] Setting OutFile to fd 1 ...
	I0821 04:30:36.567187    4930 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:30:36.567190    4930 out.go:309] Setting ErrFile to fd 2...
	I0821 04:30:36.567192    4930 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:30:36.567302    4930 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17102-920/.minikube/bin
	I0821 04:30:36.568290    4930 out.go:303] Setting JSON to false
	I0821 04:30:36.583300    4930 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3610,"bootTime":1692613826,"procs":417,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0821 04:30:36.583369    4930 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0821 04:30:36.588679    4930 out.go:177] * [custom-flannel-797000] minikube v1.31.2 on Darwin 13.5 (arm64)
	I0821 04:30:36.596624    4930 out.go:177]   - MINIKUBE_LOCATION=17102
	I0821 04:30:36.600633    4930 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	I0821 04:30:36.596678    4930 notify.go:220] Checking for updates...
	I0821 04:30:36.606626    4930 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0821 04:30:36.609682    4930 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0821 04:30:36.612629    4930 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	I0821 04:30:36.615630    4930 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0821 04:30:36.618956    4930 config.go:182] Loaded profile config "multinode-806000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0821 04:30:36.619003    4930 driver.go:373] Setting default libvirt URI to qemu:///system
	I0821 04:30:36.623541    4930 out.go:177] * Using the qemu2 driver based on user configuration
	I0821 04:30:36.630578    4930 start.go:298] selected driver: qemu2
	I0821 04:30:36.630584    4930 start.go:902] validating driver "qemu2" against <nil>
	I0821 04:30:36.630589    4930 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0821 04:30:36.632550    4930 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0821 04:30:36.635526    4930 out.go:177] * Automatically selected the socket_vmnet network
	I0821 04:30:36.638708    4930 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0821 04:30:36.638734    4930 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0821 04:30:36.638745    4930 start_flags.go:314] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0821 04:30:36.638750    4930 start_flags.go:319] config:
	{Name:custom-flannel-797000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:custom-flannel-797000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket:
NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAu
thSock: SSHAgentPID:0}
	I0821 04:30:36.642703    4930 iso.go:125] acquiring lock: {Name:mk813ea611542195bb0511881888be3fabc72ff6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 04:30:36.649576    4930 out.go:177] * Starting control plane node custom-flannel-797000 in cluster custom-flannel-797000
	I0821 04:30:36.653613    4930 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime docker
	I0821 04:30:36.653637    4930 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4
	I0821 04:30:36.653655    4930 cache.go:57] Caching tarball of preloaded images
	I0821 04:30:36.653713    4930 preload.go:174] Found /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0821 04:30:36.653719    4930 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on docker
	I0821 04:30:36.653788    4930 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/custom-flannel-797000/config.json ...
	I0821 04:30:36.653806    4930 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/custom-flannel-797000/config.json: {Name:mkdde519533a9c427c916bedf5ab7f839584b548 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 04:30:36.653986    4930 start.go:365] acquiring machines lock for custom-flannel-797000: {Name:mk9b32d9fe994be32d77812db464b2cfa7bfb400 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0821 04:30:36.654015    4930 start.go:369] acquired machines lock for "custom-flannel-797000" in 24.209µs
	I0821 04:30:36.654027    4930 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-797000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 Clus
terName:custom-flannel-797000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2
62144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0821 04:30:36.654054    4930 start.go:125] createHost starting for "" (driver="qemu2")
	I0821 04:30:36.662650    4930 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0821 04:30:36.678081    4930 start.go:159] libmachine.API.Create for "custom-flannel-797000" (driver="qemu2")
	I0821 04:30:36.678111    4930 client.go:168] LocalClient.Create starting
	I0821 04:30:36.678156    4930 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem
	I0821 04:30:36.678180    4930 main.go:141] libmachine: Decoding PEM data...
	I0821 04:30:36.678188    4930 main.go:141] libmachine: Parsing certificate...
	I0821 04:30:36.678224    4930 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/cert.pem
	I0821 04:30:36.678241    4930 main.go:141] libmachine: Decoding PEM data...
	I0821 04:30:36.678249    4930 main.go:141] libmachine: Parsing certificate...
	I0821 04:30:36.678552    4930 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17102-920/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17102-920/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0821 04:30:36.797333    4930 main.go:141] libmachine: Creating SSH key...
	I0821 04:30:36.846530    4930 main.go:141] libmachine: Creating Disk image...
	I0821 04:30:36.846538    4930 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0821 04:30:36.846692    4930 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17102-920/.minikube/machines/custom-flannel-797000/disk.qcow2.raw /Users/jenkins/minikube-integration/17102-920/.minikube/machines/custom-flannel-797000/disk.qcow2
	I0821 04:30:36.855079    4930 main.go:141] libmachine: STDOUT: 
	I0821 04:30:36.855092    4930 main.go:141] libmachine: STDERR: 
	I0821 04:30:36.855149    4930 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/custom-flannel-797000/disk.qcow2 +20000M
	I0821 04:30:36.862438    4930 main.go:141] libmachine: STDOUT: Image resized.
	
	I0821 04:30:36.862450    4930 main.go:141] libmachine: STDERR: 
	I0821 04:30:36.862469    4930 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17102-920/.minikube/machines/custom-flannel-797000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17102-920/.minikube/machines/custom-flannel-797000/disk.qcow2
	I0821 04:30:36.862476    4930 main.go:141] libmachine: Starting QEMU VM...
	I0821 04:30:36.862511    4930 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17102-920/.minikube/machines/custom-flannel-797000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/custom-flannel-797000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17102-920/.minikube/machines/custom-flannel-797000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:75:13:0b:73:28 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/custom-flannel-797000/disk.qcow2
	I0821 04:30:36.864062    4930 main.go:141] libmachine: STDOUT: 
	I0821 04:30:36.864075    4930 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0821 04:30:36.864096    4930 client.go:171] LocalClient.Create took 185.9815ms
	I0821 04:30:38.866284    4930 start.go:128] duration metric: createHost completed in 2.21224975s
	I0821 04:30:38.866340    4930 start.go:83] releasing machines lock for "custom-flannel-797000", held for 2.21235825s
	W0821 04:30:38.866388    4930 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:30:38.881984    4930 out.go:177] * Deleting "custom-flannel-797000" in qemu2 ...
	W0821 04:30:38.905868    4930 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:30:38.905902    4930 start.go:687] Will try again in 5 seconds ...
	I0821 04:30:43.908069    4930 start.go:365] acquiring machines lock for custom-flannel-797000: {Name:mk9b32d9fe994be32d77812db464b2cfa7bfb400 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0821 04:30:43.908576    4930 start.go:369] acquired machines lock for "custom-flannel-797000" in 396.042µs
	I0821 04:30:43.908723    4930 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-797000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 Clus
terName:custom-flannel-797000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2
62144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0821 04:30:43.909025    4930 start.go:125] createHost starting for "" (driver="qemu2")
	I0821 04:30:43.918772    4930 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0821 04:30:43.968146    4930 start.go:159] libmachine.API.Create for "custom-flannel-797000" (driver="qemu2")
	I0821 04:30:43.968186    4930 client.go:168] LocalClient.Create starting
	I0821 04:30:43.968299    4930 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem
	I0821 04:30:43.968372    4930 main.go:141] libmachine: Decoding PEM data...
	I0821 04:30:43.968388    4930 main.go:141] libmachine: Parsing certificate...
	I0821 04:30:43.968471    4930 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/cert.pem
	I0821 04:30:43.968511    4930 main.go:141] libmachine: Decoding PEM data...
	I0821 04:30:43.968524    4930 main.go:141] libmachine: Parsing certificate...
	I0821 04:30:43.969056    4930 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17102-920/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17102-920/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0821 04:30:44.101762    4930 main.go:141] libmachine: Creating SSH key...
	I0821 04:30:44.325879    4930 main.go:141] libmachine: Creating Disk image...
	I0821 04:30:44.325888    4930 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0821 04:30:44.326084    4930 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17102-920/.minikube/machines/custom-flannel-797000/disk.qcow2.raw /Users/jenkins/minikube-integration/17102-920/.minikube/machines/custom-flannel-797000/disk.qcow2
	I0821 04:30:44.335317    4930 main.go:141] libmachine: STDOUT: 
	I0821 04:30:44.335327    4930 main.go:141] libmachine: STDERR: 
	I0821 04:30:44.335394    4930 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/custom-flannel-797000/disk.qcow2 +20000M
	I0821 04:30:44.342677    4930 main.go:141] libmachine: STDOUT: Image resized.
	
	I0821 04:30:44.342695    4930 main.go:141] libmachine: STDERR: 
	I0821 04:30:44.342718    4930 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17102-920/.minikube/machines/custom-flannel-797000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17102-920/.minikube/machines/custom-flannel-797000/disk.qcow2
	I0821 04:30:44.342725    4930 main.go:141] libmachine: Starting QEMU VM...
	I0821 04:30:44.342767    4930 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17102-920/.minikube/machines/custom-flannel-797000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/custom-flannel-797000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17102-920/.minikube/machines/custom-flannel-797000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:d3:ea:d9:fe:5e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/custom-flannel-797000/disk.qcow2
	I0821 04:30:44.344357    4930 main.go:141] libmachine: STDOUT: 
	I0821 04:30:44.344370    4930 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0821 04:30:44.344386    4930 client.go:171] LocalClient.Create took 376.201833ms
	I0821 04:30:46.346556    4930 start.go:128] duration metric: createHost completed in 2.437493417s
	I0821 04:30:46.346610    4930 start.go:83] releasing machines lock for "custom-flannel-797000", held for 2.438056959s
	W0821 04:30:46.346973    4930 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-797000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-797000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:30:46.358529    4930 out.go:177] 
	W0821 04:30:46.362575    4930 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0821 04:30:46.362597    4930 out.go:239] * 
	* 
	W0821 04:30:46.365026    4930 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0821 04:30:46.373552    4930 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.86s)

TestNetworkPlugins/group/false/Start (9.66s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-797000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-797000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.661742625s)

-- stdout --
	* [false-797000] minikube v1.31.2 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17102
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node false-797000 in cluster false-797000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-797000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0821 04:30:48.680675    5050 out.go:296] Setting OutFile to fd 1 ...
	I0821 04:30:48.680789    5050 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:30:48.680792    5050 out.go:309] Setting ErrFile to fd 2...
	I0821 04:30:48.680794    5050 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:30:48.680902    5050 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17102-920/.minikube/bin
	I0821 04:30:48.681867    5050 out.go:303] Setting JSON to false
	I0821 04:30:48.696960    5050 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3622,"bootTime":1692613826,"procs":418,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0821 04:30:48.697019    5050 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0821 04:30:48.705209    5050 out.go:177] * [false-797000] minikube v1.31.2 on Darwin 13.5 (arm64)
	I0821 04:30:48.709297    5050 out.go:177]   - MINIKUBE_LOCATION=17102
	I0821 04:30:48.713257    5050 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	I0821 04:30:48.709351    5050 notify.go:220] Checking for updates...
	I0821 04:30:48.719263    5050 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0821 04:30:48.722227    5050 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0821 04:30:48.725209    5050 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	I0821 04:30:48.728261    5050 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0821 04:30:48.731521    5050 config.go:182] Loaded profile config "multinode-806000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0821 04:30:48.731582    5050 driver.go:373] Setting default libvirt URI to qemu:///system
	I0821 04:30:48.736178    5050 out.go:177] * Using the qemu2 driver based on user configuration
	I0821 04:30:48.743280    5050 start.go:298] selected driver: qemu2
	I0821 04:30:48.743286    5050 start.go:902] validating driver "qemu2" against <nil>
	I0821 04:30:48.743292    5050 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0821 04:30:48.745286    5050 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0821 04:30:48.748228    5050 out.go:177] * Automatically selected the socket_vmnet network
	I0821 04:30:48.751345    5050 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0821 04:30:48.751369    5050 cni.go:84] Creating CNI manager for "false"
	I0821 04:30:48.751374    5050 start_flags.go:319] config:
	{Name:false-797000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:false-797000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: Fe
atureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 04:30:48.756186    5050 iso.go:125] acquiring lock: {Name:mk813ea611542195bb0511881888be3fabc72ff6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 04:30:48.763234    5050 out.go:177] * Starting control plane node false-797000 in cluster false-797000
	I0821 04:30:48.767231    5050 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime docker
	I0821 04:30:48.767248    5050 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4
	I0821 04:30:48.767259    5050 cache.go:57] Caching tarball of preloaded images
	I0821 04:30:48.767309    5050 preload.go:174] Found /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0821 04:30:48.767314    5050 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on docker
	I0821 04:30:48.767388    5050 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/false-797000/config.json ...
	I0821 04:30:48.767400    5050 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/false-797000/config.json: {Name:mk691611099247e2fdba0702c8e57ef32a0ac783 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 04:30:48.767613    5050 start.go:365] acquiring machines lock for false-797000: {Name:mk9b32d9fe994be32d77812db464b2cfa7bfb400 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0821 04:30:48.767645    5050 start.go:369] acquired machines lock for "false-797000" in 25.917µs
	I0821 04:30:48.767657    5050 start.go:93] Provisioning new machine with config: &{Name:false-797000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:false-797000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0821 04:30:48.767687    5050 start.go:125] createHost starting for "" (driver="qemu2")
	I0821 04:30:48.776202    5050 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0821 04:30:48.792445    5050 start.go:159] libmachine.API.Create for "false-797000" (driver="qemu2")
	I0821 04:30:48.792465    5050 client.go:168] LocalClient.Create starting
	I0821 04:30:48.792520    5050 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem
	I0821 04:30:48.792550    5050 main.go:141] libmachine: Decoding PEM data...
	I0821 04:30:48.792566    5050 main.go:141] libmachine: Parsing certificate...
	I0821 04:30:48.792609    5050 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/cert.pem
	I0821 04:30:48.792628    5050 main.go:141] libmachine: Decoding PEM data...
	I0821 04:30:48.792638    5050 main.go:141] libmachine: Parsing certificate...
	I0821 04:30:48.792960    5050 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17102-920/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17102-920/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0821 04:30:48.912303    5050 main.go:141] libmachine: Creating SSH key...
	I0821 04:30:48.965299    5050 main.go:141] libmachine: Creating Disk image...
	I0821 04:30:48.965304    5050 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0821 04:30:48.965439    5050 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17102-920/.minikube/machines/false-797000/disk.qcow2.raw /Users/jenkins/minikube-integration/17102-920/.minikube/machines/false-797000/disk.qcow2
	I0821 04:30:48.973891    5050 main.go:141] libmachine: STDOUT: 
	I0821 04:30:48.973904    5050 main.go:141] libmachine: STDERR: 
	I0821 04:30:48.973961    5050 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/false-797000/disk.qcow2 +20000M
	I0821 04:30:48.981108    5050 main.go:141] libmachine: STDOUT: Image resized.
	
	I0821 04:30:48.981121    5050 main.go:141] libmachine: STDERR: 
	I0821 04:30:48.981137    5050 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17102-920/.minikube/machines/false-797000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17102-920/.minikube/machines/false-797000/disk.qcow2
	I0821 04:30:48.981149    5050 main.go:141] libmachine: Starting QEMU VM...
	I0821 04:30:48.981190    5050 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17102-920/.minikube/machines/false-797000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/false-797000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17102-920/.minikube/machines/false-797000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:0f:98:cc:76:06 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/false-797000/disk.qcow2
	I0821 04:30:48.982711    5050 main.go:141] libmachine: STDOUT: 
	I0821 04:30:48.982722    5050 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0821 04:30:48.982740    5050 client.go:171] LocalClient.Create took 190.270459ms
	I0821 04:30:50.984888    5050 start.go:128] duration metric: createHost completed in 2.21721775s
	I0821 04:30:50.984945    5050 start.go:83] releasing machines lock for "false-797000", held for 2.217333875s
	W0821 04:30:50.984993    5050 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:30:50.995289    5050 out.go:177] * Deleting "false-797000" in qemu2 ...
	W0821 04:30:51.016249    5050 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:30:51.016272    5050 start.go:687] Will try again in 5 seconds ...
	I0821 04:30:56.018487    5050 start.go:365] acquiring machines lock for false-797000: {Name:mk9b32d9fe994be32d77812db464b2cfa7bfb400 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0821 04:30:56.019006    5050 start.go:369] acquired machines lock for "false-797000" in 417.083µs
	I0821 04:30:56.019167    5050 start.go:93] Provisioning new machine with config: &{Name:false-797000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:false-797000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0821 04:30:56.019459    5050 start.go:125] createHost starting for "" (driver="qemu2")
	I0821 04:30:56.029043    5050 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0821 04:30:56.079606    5050 start.go:159] libmachine.API.Create for "false-797000" (driver="qemu2")
	I0821 04:30:56.079663    5050 client.go:168] LocalClient.Create starting
	I0821 04:30:56.079781    5050 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem
	I0821 04:30:56.079837    5050 main.go:141] libmachine: Decoding PEM data...
	I0821 04:30:56.079856    5050 main.go:141] libmachine: Parsing certificate...
	I0821 04:30:56.079948    5050 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/cert.pem
	I0821 04:30:56.079990    5050 main.go:141] libmachine: Decoding PEM data...
	I0821 04:30:56.080006    5050 main.go:141] libmachine: Parsing certificate...
	I0821 04:30:56.080541    5050 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17102-920/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17102-920/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0821 04:30:56.213069    5050 main.go:141] libmachine: Creating SSH key...
	I0821 04:30:56.259040    5050 main.go:141] libmachine: Creating Disk image...
	I0821 04:30:56.259045    5050 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0821 04:30:56.259197    5050 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17102-920/.minikube/machines/false-797000/disk.qcow2.raw /Users/jenkins/minikube-integration/17102-920/.minikube/machines/false-797000/disk.qcow2
	I0821 04:30:56.267659    5050 main.go:141] libmachine: STDOUT: 
	I0821 04:30:56.267672    5050 main.go:141] libmachine: STDERR: 
	I0821 04:30:56.267733    5050 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/false-797000/disk.qcow2 +20000M
	I0821 04:30:56.274873    5050 main.go:141] libmachine: STDOUT: Image resized.
	
	I0821 04:30:56.274882    5050 main.go:141] libmachine: STDERR: 
	I0821 04:30:56.274895    5050 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17102-920/.minikube/machines/false-797000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17102-920/.minikube/machines/false-797000/disk.qcow2
	I0821 04:30:56.274902    5050 main.go:141] libmachine: Starting QEMU VM...
	I0821 04:30:56.274937    5050 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17102-920/.minikube/machines/false-797000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/false-797000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17102-920/.minikube/machines/false-797000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:a8:b1:13:a4:4f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/false-797000/disk.qcow2
	I0821 04:30:56.276454    5050 main.go:141] libmachine: STDOUT: 
	I0821 04:30:56.276465    5050 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0821 04:30:56.276486    5050 client.go:171] LocalClient.Create took 196.810916ms
	I0821 04:30:58.278604    5050 start.go:128] duration metric: createHost completed in 2.259164125s
	I0821 04:30:58.278664    5050 start.go:83] releasing machines lock for "false-797000", held for 2.25967675s
	W0821 04:30:58.279153    5050 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-797000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-797000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:30:58.289679    5050 out.go:177] 
	W0821 04:30:58.292856    5050 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0821 04:30:58.292900    5050 out.go:239] * 
	* 
	W0821 04:30:58.295591    5050 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0821 04:30:58.306204    5050 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.66s)
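
Every failure in this group follows the same pattern: qemu-img creates and resizes the disk image successfully, but the QEMU launch is wrapped in /opt/socket_vmnet/bin/socket_vmnet_client, which must first connect to the unix socket at /var/run/socket_vmnet, and that dial fails with "Connection refused", i.e. no socket_vmnet daemon is listening on this host. A minimal diagnostic sketch in Go (not part of the test suite; names and output strings are illustrative) that reproduces exactly this check:

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

// Probe the unix socket that socket_vmnet_client dials before it spawns
// qemu-system-aarch64. A dial error here reproduces the
// 'Failed to connect to "/var/run/socket_vmnet": Connection refused'
// failure seen throughout the logs above.
func main() {
	const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the profile config
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is listening at", sock)
}

On a host where socket_vmnet was installed via Homebrew, starting its service (for example with "sudo brew services start socket_vmnet") would normally make this probe succeed; that is an assumption about this agent's setup, not something the log itself confirms.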

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (9.77s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-797000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-797000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.764393s)

                                                
                                                
-- stdout --
	* [enable-default-cni-797000] minikube v1.31.2 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17102
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node enable-default-cni-797000 in cluster enable-default-cni-797000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-797000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0821 04:31:00.453039    5160 out.go:296] Setting OutFile to fd 1 ...
	I0821 04:31:00.453156    5160 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:31:00.453158    5160 out.go:309] Setting ErrFile to fd 2...
	I0821 04:31:00.453161    5160 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:31:00.453275    5160 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17102-920/.minikube/bin
	I0821 04:31:00.454291    5160 out.go:303] Setting JSON to false
	I0821 04:31:00.469408    5160 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3634,"bootTime":1692613826,"procs":418,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0821 04:31:00.469504    5160 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0821 04:31:00.475189    5160 out.go:177] * [enable-default-cni-797000] minikube v1.31.2 on Darwin 13.5 (arm64)
	I0821 04:31:00.483184    5160 out.go:177]   - MINIKUBE_LOCATION=17102
	I0821 04:31:00.487133    5160 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	I0821 04:31:00.483258    5160 notify.go:220] Checking for updates...
	I0821 04:31:00.491192    5160 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0821 04:31:00.492533    5160 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0821 04:31:00.495156    5160 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	I0821 04:31:00.498195    5160 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0821 04:31:00.501954    5160 config.go:182] Loaded profile config "multinode-806000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0821 04:31:00.502019    5160 driver.go:373] Setting default libvirt URI to qemu:///system
	I0821 04:31:00.506157    5160 out.go:177] * Using the qemu2 driver based on user configuration
	I0821 04:31:00.513160    5160 start.go:298] selected driver: qemu2
	I0821 04:31:00.513166    5160 start.go:902] validating driver "qemu2" against <nil>
	I0821 04:31:00.513172    5160 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0821 04:31:00.515120    5160 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0821 04:31:00.518093    5160 out.go:177] * Automatically selected the socket_vmnet network
	E0821 04:31:00.522177    5160 start_flags.go:453] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0821 04:31:00.522195    5160 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0821 04:31:00.522217    5160 cni.go:84] Creating CNI manager for "bridge"
	I0821 04:31:00.522220    5160 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0821 04:31:00.522225    5160 start_flags.go:319] config:
	{Name:enable-default-cni-797000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:enable-default-cni-797000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 04:31:00.526497    5160 iso.go:125] acquiring lock: {Name:mk813ea611542195bb0511881888be3fabc72ff6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 04:31:00.534151    5160 out.go:177] * Starting control plane node enable-default-cni-797000 in cluster enable-default-cni-797000
	I0821 04:31:00.538164    5160 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime docker
	I0821 04:31:00.538188    5160 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4
	I0821 04:31:00.538205    5160 cache.go:57] Caching tarball of preloaded images
	I0821 04:31:00.538263    5160 preload.go:174] Found /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0821 04:31:00.538269    5160 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on docker
	I0821 04:31:00.538346    5160 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/enable-default-cni-797000/config.json ...
	I0821 04:31:00.538360    5160 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/enable-default-cni-797000/config.json: {Name:mka0467e56484ef351cff531cb1fc52e2e3ac873 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 04:31:00.538559    5160 start.go:365] acquiring machines lock for enable-default-cni-797000: {Name:mk9b32d9fe994be32d77812db464b2cfa7bfb400 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0821 04:31:00.538591    5160 start.go:369] acquired machines lock for "enable-default-cni-797000" in 23.5µs
	I0821 04:31:00.538602    5160 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-797000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:enable-default-cni-797000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0821 04:31:00.538644    5160 start.go:125] createHost starting for "" (driver="qemu2")
	I0821 04:31:00.547142    5160 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0821 04:31:00.562712    5160 start.go:159] libmachine.API.Create for "enable-default-cni-797000" (driver="qemu2")
	I0821 04:31:00.562746    5160 client.go:168] LocalClient.Create starting
	I0821 04:31:00.562799    5160 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem
	I0821 04:31:00.562822    5160 main.go:141] libmachine: Decoding PEM data...
	I0821 04:31:00.562838    5160 main.go:141] libmachine: Parsing certificate...
	I0821 04:31:00.562877    5160 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/cert.pem
	I0821 04:31:00.562895    5160 main.go:141] libmachine: Decoding PEM data...
	I0821 04:31:00.562903    5160 main.go:141] libmachine: Parsing certificate...
	I0821 04:31:00.563226    5160 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17102-920/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17102-920/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0821 04:31:00.682787    5160 main.go:141] libmachine: Creating SSH key...
	I0821 04:31:00.831521    5160 main.go:141] libmachine: Creating Disk image...
	I0821 04:31:00.831527    5160 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0821 04:31:00.831732    5160 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17102-920/.minikube/machines/enable-default-cni-797000/disk.qcow2.raw /Users/jenkins/minikube-integration/17102-920/.minikube/machines/enable-default-cni-797000/disk.qcow2
	I0821 04:31:00.840485    5160 main.go:141] libmachine: STDOUT: 
	I0821 04:31:00.840498    5160 main.go:141] libmachine: STDERR: 
	I0821 04:31:00.840548    5160 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/enable-default-cni-797000/disk.qcow2 +20000M
	I0821 04:31:00.847740    5160 main.go:141] libmachine: STDOUT: Image resized.
	
	I0821 04:31:00.847759    5160 main.go:141] libmachine: STDERR: 
	I0821 04:31:00.847780    5160 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17102-920/.minikube/machines/enable-default-cni-797000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17102-920/.minikube/machines/enable-default-cni-797000/disk.qcow2
	I0821 04:31:00.847785    5160 main.go:141] libmachine: Starting QEMU VM...
	I0821 04:31:00.847817    5160 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17102-920/.minikube/machines/enable-default-cni-797000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/enable-default-cni-797000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17102-920/.minikube/machines/enable-default-cni-797000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:1c:fa:4a:ec:c7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/enable-default-cni-797000/disk.qcow2
	I0821 04:31:00.849351    5160 main.go:141] libmachine: STDOUT: 
	I0821 04:31:00.849367    5160 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0821 04:31:00.849390    5160 client.go:171] LocalClient.Create took 286.642834ms
	I0821 04:31:02.851499    5160 start.go:128] duration metric: createHost completed in 2.312882584s
	I0821 04:31:02.851568    5160 start.go:83] releasing machines lock for "enable-default-cni-797000", held for 2.313011958s
	W0821 04:31:02.851671    5160 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:31:02.865461    5160 out.go:177] * Deleting "enable-default-cni-797000" in qemu2 ...
	W0821 04:31:02.886119    5160 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:31:02.886150    5160 start.go:687] Will try again in 5 seconds ...
	I0821 04:31:07.888348    5160 start.go:365] acquiring machines lock for enable-default-cni-797000: {Name:mk9b32d9fe994be32d77812db464b2cfa7bfb400 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0821 04:31:07.888835    5160 start.go:369] acquired machines lock for "enable-default-cni-797000" in 380.583µs
	I0821 04:31:07.888975    5160 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-797000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:enable-default-cni-797000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0821 04:31:07.889285    5160 start.go:125] createHost starting for "" (driver="qemu2")
	I0821 04:31:07.900124    5160 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0821 04:31:07.947736    5160 start.go:159] libmachine.API.Create for "enable-default-cni-797000" (driver="qemu2")
	I0821 04:31:07.947791    5160 client.go:168] LocalClient.Create starting
	I0821 04:31:07.947918    5160 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem
	I0821 04:31:07.947976    5160 main.go:141] libmachine: Decoding PEM data...
	I0821 04:31:07.947993    5160 main.go:141] libmachine: Parsing certificate...
	I0821 04:31:07.948052    5160 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/cert.pem
	I0821 04:31:07.948086    5160 main.go:141] libmachine: Decoding PEM data...
	I0821 04:31:07.948099    5160 main.go:141] libmachine: Parsing certificate...
	I0821 04:31:07.948617    5160 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17102-920/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17102-920/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0821 04:31:08.079711    5160 main.go:141] libmachine: Creating SSH key...
	I0821 04:31:08.133353    5160 main.go:141] libmachine: Creating Disk image...
	I0821 04:31:08.133358    5160 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0821 04:31:08.133486    5160 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17102-920/.minikube/machines/enable-default-cni-797000/disk.qcow2.raw /Users/jenkins/minikube-integration/17102-920/.minikube/machines/enable-default-cni-797000/disk.qcow2
	I0821 04:31:08.141972    5160 main.go:141] libmachine: STDOUT: 
	I0821 04:31:08.141986    5160 main.go:141] libmachine: STDERR: 
	I0821 04:31:08.142055    5160 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/enable-default-cni-797000/disk.qcow2 +20000M
	I0821 04:31:08.149246    5160 main.go:141] libmachine: STDOUT: Image resized.
	
	I0821 04:31:08.149260    5160 main.go:141] libmachine: STDERR: 
	I0821 04:31:08.149275    5160 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17102-920/.minikube/machines/enable-default-cni-797000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17102-920/.minikube/machines/enable-default-cni-797000/disk.qcow2
	I0821 04:31:08.149282    5160 main.go:141] libmachine: Starting QEMU VM...
	I0821 04:31:08.149322    5160 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17102-920/.minikube/machines/enable-default-cni-797000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/enable-default-cni-797000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17102-920/.minikube/machines/enable-default-cni-797000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:f9:af:f1:db:be -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/enable-default-cni-797000/disk.qcow2
	I0821 04:31:08.150928    5160 main.go:141] libmachine: STDOUT: 
	I0821 04:31:08.150941    5160 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0821 04:31:08.150954    5160 client.go:171] LocalClient.Create took 203.1625ms
	I0821 04:31:10.153098    5160 start.go:128] duration metric: createHost completed in 2.263770417s
	I0821 04:31:10.153156    5160 start.go:83] releasing machines lock for "enable-default-cni-797000", held for 2.264339667s
	W0821 04:31:10.153548    5160 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-797000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-797000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:31:10.162253    5160 out.go:177] 
	W0821 04:31:10.166189    5160 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0821 04:31:10.166235    5160 out.go:239] * 
	* 
	W0821 04:31:10.169123    5160 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0821 04:31:10.177166    5160 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.77s)
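
Two details in this run are worth noting: start_flags.go:453 rewrites the deprecated --enable-default-cni flag to --cni=bridge before any VM work begins, and host creation then follows the same try, delete, wait five seconds, retry-once sequence seen in every other failure here. A compact Go sketch of that retry flow (hypothetical names; it mirrors the behavior visible in the log, not minikube's actual API):

package main

import (
	"errors"
	"fmt"
	"time"
)

// startWithRetry mirrors the sequence in the log: attempt to create the
// host, and on failure delete the partial machine, wait five seconds
// ("Will try again in 5 seconds ..."), then retry exactly once.
func startWithRetry(create func() error, deleteHost func()) error {
	if err := create(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		deleteHost()
		time.Sleep(5 * time.Second)
		return create() // a second failure is fatal (exit status 80)
	}
	return nil
}

func main() {
	// Stub that always fails the same way socket_vmnet_client does here.
	create := func() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}
	deleteHost := func() { fmt.Println(`* Deleting "enable-default-cni-797000" in qemu2 ...`) }
	if err := startWithRetry(create, deleteHost); err != nil {
		fmt.Println("X Exiting due to GUEST_PROVISION:", err)
	}
}

Because the second attempt fails identically, each stdout transcript shows two "Creating qemu2 VM" blocks separated by a "Deleting ... in qemu2" line.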

                                                
                                    
TestNetworkPlugins/group/flannel/Start (9.79s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-797000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-797000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.784396458s)

                                                
                                                
-- stdout --
	* [flannel-797000] minikube v1.31.2 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17102
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node flannel-797000 in cluster flannel-797000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-797000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0821 04:31:12.310539    5270 out.go:296] Setting OutFile to fd 1 ...
	I0821 04:31:12.310672    5270 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:31:12.310675    5270 out.go:309] Setting ErrFile to fd 2...
	I0821 04:31:12.310678    5270 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:31:12.310791    5270 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17102-920/.minikube/bin
	I0821 04:31:12.311793    5270 out.go:303] Setting JSON to false
	I0821 04:31:12.327050    5270 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3646,"bootTime":1692613826,"procs":412,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0821 04:31:12.327123    5270 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0821 04:31:12.332154    5270 out.go:177] * [flannel-797000] minikube v1.31.2 on Darwin 13.5 (arm64)
	I0821 04:31:12.339962    5270 out.go:177]   - MINIKUBE_LOCATION=17102
	I0821 04:31:12.344181    5270 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	I0821 04:31:12.340008    5270 notify.go:220] Checking for updates...
	I0821 04:31:12.352182    5270 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0821 04:31:12.355130    5270 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0821 04:31:12.358160    5270 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	I0821 04:31:12.361178    5270 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0821 04:31:12.362936    5270 config.go:182] Loaded profile config "multinode-806000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0821 04:31:12.363223    5270 driver.go:373] Setting default libvirt URI to qemu:///system
	I0821 04:31:12.367152    5270 out.go:177] * Using the qemu2 driver based on user configuration
	I0821 04:31:12.373988    5270 start.go:298] selected driver: qemu2
	I0821 04:31:12.373996    5270 start.go:902] validating driver "qemu2" against <nil>
	I0821 04:31:12.374004    5270 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0821 04:31:12.376054    5270 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0821 04:31:12.379174    5270 out.go:177] * Automatically selected the socket_vmnet network
	I0821 04:31:12.383209    5270 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0821 04:31:12.383229    5270 cni.go:84] Creating CNI manager for "flannel"
	I0821 04:31:12.383234    5270 start_flags.go:314] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0821 04:31:12.383240    5270 start_flags.go:319] config:
	{Name:flannel-797000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:flannel-797000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 04:31:12.387452    5270 iso.go:125] acquiring lock: {Name:mk813ea611542195bb0511881888be3fabc72ff6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 04:31:12.395115    5270 out.go:177] * Starting control plane node flannel-797000 in cluster flannel-797000
	I0821 04:31:12.399129    5270 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime docker
	I0821 04:31:12.399145    5270 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4
	I0821 04:31:12.399155    5270 cache.go:57] Caching tarball of preloaded images
	I0821 04:31:12.399207    5270 preload.go:174] Found /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0821 04:31:12.399214    5270 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on docker
	I0821 04:31:12.399279    5270 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/flannel-797000/config.json ...
	I0821 04:31:12.399293    5270 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/flannel-797000/config.json: {Name:mkc7ea8520578cdb1bd6d561d391db47f121f483 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 04:31:12.399495    5270 start.go:365] acquiring machines lock for flannel-797000: {Name:mk9b32d9fe994be32d77812db464b2cfa7bfb400 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0821 04:31:12.399525    5270 start.go:369] acquired machines lock for "flannel-797000" in 23.834µs
	I0821 04:31:12.399536    5270 start.go:93] Provisioning new machine with config: &{Name:flannel-797000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:flannel-797000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0821 04:31:12.399574    5270 start.go:125] createHost starting for "" (driver="qemu2")
	I0821 04:31:12.407965    5270 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0821 04:31:12.424057    5270 start.go:159] libmachine.API.Create for "flannel-797000" (driver="qemu2")
	I0821 04:31:12.424082    5270 client.go:168] LocalClient.Create starting
	I0821 04:31:12.424168    5270 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem
	I0821 04:31:12.424194    5270 main.go:141] libmachine: Decoding PEM data...
	I0821 04:31:12.424208    5270 main.go:141] libmachine: Parsing certificate...
	I0821 04:31:12.424247    5270 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/cert.pem
	I0821 04:31:12.424274    5270 main.go:141] libmachine: Decoding PEM data...
	I0821 04:31:12.424281    5270 main.go:141] libmachine: Parsing certificate...
	I0821 04:31:12.424611    5270 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17102-920/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17102-920/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0821 04:31:12.545446    5270 main.go:141] libmachine: Creating SSH key...
	I0821 04:31:12.684097    5270 main.go:141] libmachine: Creating Disk image...
	I0821 04:31:12.684103    5270 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0821 04:31:12.684246    5270 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17102-920/.minikube/machines/flannel-797000/disk.qcow2.raw /Users/jenkins/minikube-integration/17102-920/.minikube/machines/flannel-797000/disk.qcow2
	I0821 04:31:12.693022    5270 main.go:141] libmachine: STDOUT: 
	I0821 04:31:12.693039    5270 main.go:141] libmachine: STDERR: 
	I0821 04:31:12.693091    5270 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/flannel-797000/disk.qcow2 +20000M
	I0821 04:31:12.700204    5270 main.go:141] libmachine: STDOUT: Image resized.
	
	I0821 04:31:12.700218    5270 main.go:141] libmachine: STDERR: 
	I0821 04:31:12.700238    5270 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17102-920/.minikube/machines/flannel-797000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17102-920/.minikube/machines/flannel-797000/disk.qcow2
	I0821 04:31:12.700246    5270 main.go:141] libmachine: Starting QEMU VM...
	I0821 04:31:12.700280    5270 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17102-920/.minikube/machines/flannel-797000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/flannel-797000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17102-920/.minikube/machines/flannel-797000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:2c:d4:1a:c8:3c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/flannel-797000/disk.qcow2
	I0821 04:31:12.701831    5270 main.go:141] libmachine: STDOUT: 
	I0821 04:31:12.701845    5270 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0821 04:31:12.701864    5270 client.go:171] LocalClient.Create took 277.78175ms
	I0821 04:31:14.704063    5270 start.go:128] duration metric: createHost completed in 2.304499292s
	I0821 04:31:14.704158    5270 start.go:83] releasing machines lock for "flannel-797000", held for 2.304664042s
	W0821 04:31:14.704289    5270 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:31:14.710937    5270 out.go:177] * Deleting "flannel-797000" in qemu2 ...
	W0821 04:31:14.732162    5270 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:31:14.732233    5270 start.go:687] Will try again in 5 seconds ...
	I0821 04:31:19.734510    5270 start.go:365] acquiring machines lock for flannel-797000: {Name:mk9b32d9fe994be32d77812db464b2cfa7bfb400 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0821 04:31:19.734929    5270 start.go:369] acquired machines lock for "flannel-797000" in 328.042µs
	I0821 04:31:19.735045    5270 start.go:93] Provisioning new machine with config: &{Name:flannel-797000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:flannel-797000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0821 04:31:19.735369    5270 start.go:125] createHost starting for "" (driver="qemu2")
	I0821 04:31:19.745969    5270 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0821 04:31:19.793190    5270 start.go:159] libmachine.API.Create for "flannel-797000" (driver="qemu2")
	I0821 04:31:19.793231    5270 client.go:168] LocalClient.Create starting
	I0821 04:31:19.793355    5270 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem
	I0821 04:31:19.793413    5270 main.go:141] libmachine: Decoding PEM data...
	I0821 04:31:19.793431    5270 main.go:141] libmachine: Parsing certificate...
	I0821 04:31:19.793492    5270 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/cert.pem
	I0821 04:31:19.793527    5270 main.go:141] libmachine: Decoding PEM data...
	I0821 04:31:19.793539    5270 main.go:141] libmachine: Parsing certificate...
	I0821 04:31:19.794048    5270 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17102-920/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17102-920/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0821 04:31:19.925006    5270 main.go:141] libmachine: Creating SSH key...
	I0821 04:31:20.008726    5270 main.go:141] libmachine: Creating Disk image...
	I0821 04:31:20.008731    5270 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0821 04:31:20.008889    5270 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17102-920/.minikube/machines/flannel-797000/disk.qcow2.raw /Users/jenkins/minikube-integration/17102-920/.minikube/machines/flannel-797000/disk.qcow2
	I0821 04:31:20.017513    5270 main.go:141] libmachine: STDOUT: 
	I0821 04:31:20.017527    5270 main.go:141] libmachine: STDERR: 
	I0821 04:31:20.017583    5270 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/flannel-797000/disk.qcow2 +20000M
	I0821 04:31:20.024861    5270 main.go:141] libmachine: STDOUT: Image resized.
	
	I0821 04:31:20.024875    5270 main.go:141] libmachine: STDERR: 
	I0821 04:31:20.024888    5270 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17102-920/.minikube/machines/flannel-797000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17102-920/.minikube/machines/flannel-797000/disk.qcow2
	I0821 04:31:20.024894    5270 main.go:141] libmachine: Starting QEMU VM...
	I0821 04:31:20.024944    5270 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17102-920/.minikube/machines/flannel-797000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/flannel-797000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17102-920/.minikube/machines/flannel-797000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:fc:67:ca:07:f1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/flannel-797000/disk.qcow2
	I0821 04:31:20.026497    5270 main.go:141] libmachine: STDOUT: 
	I0821 04:31:20.026512    5270 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0821 04:31:20.026530    5270 client.go:171] LocalClient.Create took 233.291625ms
	I0821 04:31:22.028749    5270 start.go:128] duration metric: createHost completed in 2.293399667s
	I0821 04:31:22.028815    5270 start.go:83] releasing machines lock for "flannel-797000", held for 2.293909042s
	W0821 04:31:22.029209    5270 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-797000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:31:22.039972    5270 out.go:177] 
	W0821 04:31:22.042897    5270 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0821 04:31:22.042920    5270 out.go:239] * 
	W0821 04:31:22.045392    5270 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0821 04:31:22.054988    5270 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.79s)
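
Every failure in this group has the same signature: /opt/socket_vmnet/bin/socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so qemu-system-aarch64 is never launched and minikube gives up after one delete-and-retry cycle with exit status 80. The checks below are a minimal diagnostic sketch for the CI host, not part of the recorded run; they assume socket_vmnet is installed under /opt/socket_vmnet (matching the client path in the logs) and, for the restart step, that it is managed as a Homebrew service. Adjust to however the daemon is actually supervised on this machine.

	# Does the daemon's listening socket exist?
	ls -l /var/run/socket_vmnet

	# Is a socket_vmnet process running at all?
	sudo pgrep -fl socket_vmnet

	# Probe the socket directly; "Connection refused" here reproduces
	# the failure without involving minikube.
	sudo nc -U /var/run/socket_vmnet < /dev/null

	# If (assumption) the daemon is managed as a Homebrew service:
	sudo brew services restart socket_vmnet
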

TestNetworkPlugins/group/bridge/Start (9.82s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-797000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-797000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.819568042s)

-- stdout --
	* [bridge-797000] minikube v1.31.2 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17102
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node bridge-797000 in cluster bridge-797000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-797000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0821 04:31:24.388139    5390 out.go:296] Setting OutFile to fd 1 ...
	I0821 04:31:24.388256    5390 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:31:24.388259    5390 out.go:309] Setting ErrFile to fd 2...
	I0821 04:31:24.388262    5390 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:31:24.388375    5390 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17102-920/.minikube/bin
	I0821 04:31:24.389340    5390 out.go:303] Setting JSON to false
	I0821 04:31:24.404503    5390 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3658,"bootTime":1692613826,"procs":413,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0821 04:31:24.404575    5390 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0821 04:31:24.409265    5390 out.go:177] * [bridge-797000] minikube v1.31.2 on Darwin 13.5 (arm64)
	I0821 04:31:24.416474    5390 out.go:177]   - MINIKUBE_LOCATION=17102
	I0821 04:31:24.419399    5390 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	I0821 04:31:24.416543    5390 notify.go:220] Checking for updates...
	I0821 04:31:24.426417    5390 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0821 04:31:24.430293    5390 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0821 04:31:24.433396    5390 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	I0821 04:31:24.436404    5390 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0821 04:31:24.439540    5390 config.go:182] Loaded profile config "multinode-806000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0821 04:31:24.439578    5390 driver.go:373] Setting default libvirt URI to qemu:///system
	I0821 04:31:24.442348    5390 out.go:177] * Using the qemu2 driver based on user configuration
	I0821 04:31:24.448315    5390 start.go:298] selected driver: qemu2
	I0821 04:31:24.448320    5390 start.go:902] validating driver "qemu2" against <nil>
	I0821 04:31:24.448326    5390 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0821 04:31:24.450373    5390 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0821 04:31:24.454477    5390 out.go:177] * Automatically selected the socket_vmnet network
	I0821 04:31:24.457467    5390 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0821 04:31:24.457486    5390 cni.go:84] Creating CNI manager for "bridge"
	I0821 04:31:24.457492    5390 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0821 04:31:24.457496    5390 start_flags.go:319] config:
	{Name:bridge-797000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:bridge-797000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 04:31:24.461737    5390 iso.go:125] acquiring lock: {Name:mk813ea611542195bb0511881888be3fabc72ff6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 04:31:24.469416    5390 out.go:177] * Starting control plane node bridge-797000 in cluster bridge-797000
	I0821 04:31:24.473199    5390 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime docker
	I0821 04:31:24.473218    5390 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4
	I0821 04:31:24.473229    5390 cache.go:57] Caching tarball of preloaded images
	I0821 04:31:24.473578    5390 preload.go:174] Found /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0821 04:31:24.473586    5390 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on docker
	I0821 04:31:24.473658    5390 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/bridge-797000/config.json ...
	I0821 04:31:24.473672    5390 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/bridge-797000/config.json: {Name:mkcd5a465f478b66c88cadb2f0ac6529f0ea31e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 04:31:24.473878    5390 start.go:365] acquiring machines lock for bridge-797000: {Name:mk9b32d9fe994be32d77812db464b2cfa7bfb400 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0821 04:31:24.473911    5390 start.go:369] acquired machines lock for "bridge-797000" in 23.5µs
	I0821 04:31:24.473923    5390 start.go:93] Provisioning new machine with config: &{Name:bridge-797000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:bridge-797000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0821 04:31:24.473978    5390 start.go:125] createHost starting for "" (driver="qemu2")
	I0821 04:31:24.478464    5390 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0821 04:31:24.493741    5390 start.go:159] libmachine.API.Create for "bridge-797000" (driver="qemu2")
	I0821 04:31:24.493767    5390 client.go:168] LocalClient.Create starting
	I0821 04:31:24.493822    5390 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem
	I0821 04:31:24.493848    5390 main.go:141] libmachine: Decoding PEM data...
	I0821 04:31:24.493859    5390 main.go:141] libmachine: Parsing certificate...
	I0821 04:31:24.493895    5390 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/cert.pem
	I0821 04:31:24.493913    5390 main.go:141] libmachine: Decoding PEM data...
	I0821 04:31:24.493921    5390 main.go:141] libmachine: Parsing certificate...
	I0821 04:31:24.494216    5390 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17102-920/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17102-920/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0821 04:31:24.613490    5390 main.go:141] libmachine: Creating SSH key...
	I0821 04:31:24.727768    5390 main.go:141] libmachine: Creating Disk image...
	I0821 04:31:24.727774    5390 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0821 04:31:24.727911    5390 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17102-920/.minikube/machines/bridge-797000/disk.qcow2.raw /Users/jenkins/minikube-integration/17102-920/.minikube/machines/bridge-797000/disk.qcow2
	I0821 04:31:24.736656    5390 main.go:141] libmachine: STDOUT: 
	I0821 04:31:24.736668    5390 main.go:141] libmachine: STDERR: 
	I0821 04:31:24.736729    5390 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/bridge-797000/disk.qcow2 +20000M
	I0821 04:31:24.743836    5390 main.go:141] libmachine: STDOUT: Image resized.
	
	I0821 04:31:24.743848    5390 main.go:141] libmachine: STDERR: 
	I0821 04:31:24.743869    5390 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17102-920/.minikube/machines/bridge-797000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17102-920/.minikube/machines/bridge-797000/disk.qcow2
	I0821 04:31:24.743875    5390 main.go:141] libmachine: Starting QEMU VM...
	I0821 04:31:24.743926    5390 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17102-920/.minikube/machines/bridge-797000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/bridge-797000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17102-920/.minikube/machines/bridge-797000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:ef:cf:66:83:0a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/bridge-797000/disk.qcow2
	I0821 04:31:24.745408    5390 main.go:141] libmachine: STDOUT: 
	I0821 04:31:24.745418    5390 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0821 04:31:24.745438    5390 client.go:171] LocalClient.Create took 251.668166ms
	I0821 04:31:26.747571    5390 start.go:128] duration metric: createHost completed in 2.273620166s
	I0821 04:31:26.747626    5390 start.go:83] releasing machines lock for "bridge-797000", held for 2.273749333s
	W0821 04:31:26.747676    5390 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:31:26.760978    5390 out.go:177] * Deleting "bridge-797000" in qemu2 ...
	W0821 04:31:26.781810    5390 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:31:26.781842    5390 start.go:687] Will try again in 5 seconds ...
	I0821 04:31:31.783979    5390 start.go:365] acquiring machines lock for bridge-797000: {Name:mk9b32d9fe994be32d77812db464b2cfa7bfb400 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0821 04:31:31.784365    5390 start.go:369] acquired machines lock for "bridge-797000" in 305.542µs
	I0821 04:31:31.784489    5390 start.go:93] Provisioning new machine with config: &{Name:bridge-797000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:bridge-797000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0821 04:31:31.784791    5390 start.go:125] createHost starting for "" (driver="qemu2")
	I0821 04:31:31.794493    5390 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0821 04:31:31.840822    5390 start.go:159] libmachine.API.Create for "bridge-797000" (driver="qemu2")
	I0821 04:31:31.840863    5390 client.go:168] LocalClient.Create starting
	I0821 04:31:31.840951    5390 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem
	I0821 04:31:31.840996    5390 main.go:141] libmachine: Decoding PEM data...
	I0821 04:31:31.841013    5390 main.go:141] libmachine: Parsing certificate...
	I0821 04:31:31.841082    5390 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/cert.pem
	I0821 04:31:31.841116    5390 main.go:141] libmachine: Decoding PEM data...
	I0821 04:31:31.841130    5390 main.go:141] libmachine: Parsing certificate...
	I0821 04:31:31.841587    5390 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17102-920/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17102-920/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0821 04:31:31.974147    5390 main.go:141] libmachine: Creating SSH key...
	I0821 04:31:32.124350    5390 main.go:141] libmachine: Creating Disk image...
	I0821 04:31:32.124357    5390 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0821 04:31:32.124518    5390 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17102-920/.minikube/machines/bridge-797000/disk.qcow2.raw /Users/jenkins/minikube-integration/17102-920/.minikube/machines/bridge-797000/disk.qcow2
	I0821 04:31:32.133231    5390 main.go:141] libmachine: STDOUT: 
	I0821 04:31:32.133246    5390 main.go:141] libmachine: STDERR: 
	I0821 04:31:32.133317    5390 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/bridge-797000/disk.qcow2 +20000M
	I0821 04:31:32.140472    5390 main.go:141] libmachine: STDOUT: Image resized.
	
	I0821 04:31:32.140490    5390 main.go:141] libmachine: STDERR: 
	I0821 04:31:32.140504    5390 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17102-920/.minikube/machines/bridge-797000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17102-920/.minikube/machines/bridge-797000/disk.qcow2
	I0821 04:31:32.140511    5390 main.go:141] libmachine: Starting QEMU VM...
	I0821 04:31:32.140546    5390 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17102-920/.minikube/machines/bridge-797000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/bridge-797000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17102-920/.minikube/machines/bridge-797000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:3f:19:62:d9:d5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/bridge-797000/disk.qcow2
	I0821 04:31:32.142002    5390 main.go:141] libmachine: STDOUT: 
	I0821 04:31:32.142013    5390 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0821 04:31:32.142025    5390 client.go:171] LocalClient.Create took 301.158584ms
	I0821 04:31:34.144185    5390 start.go:128] duration metric: createHost completed in 2.359378583s
	I0821 04:31:34.144274    5390 start.go:83] releasing machines lock for "bridge-797000", held for 2.359931625s
	W0821 04:31:34.144678    5390 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-797000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:31:34.151294    5390 out.go:177] 
	W0821 04:31:34.156345    5390 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0821 04:31:34.156408    5390 out.go:239] * 
	W0821 04:31:34.158977    5390 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0821 04:31:34.167120    5390 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.82s)
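
The stdout above is empty apart from minikube's own progress lines because qemu never starts: socket_vmnet_client first connects to the Unix socket and only then executes the command that follows, handing the connection to qemu as file descriptor 3 (hence the -netdev socket,id=net0,fd=3 argument in the log). The failing step can therefore be reproduced in isolation; this sketch substitutes /usr/bin/true for the qemu command line:

	# With no daemon listening, the client fails before exec'ing the
	# wrapped command, matching the "Failed to connect" lines above.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true

If restoring the daemon is not an option for a run, the qemu2 driver can also be started on its user-mode network (for example, minikube start --driver=qemu2 --network=builtin), which does not go through socket_vmnet at the cost of features such as minikube tunnel.
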

TestNetworkPlugins/group/kubenet/Start (9.71s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-797000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-797000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.70998675s)

-- stdout --
	* [kubenet-797000] minikube v1.31.2 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17102
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kubenet-797000 in cluster kubenet-797000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-797000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0821 04:31:36.291331    5503 out.go:296] Setting OutFile to fd 1 ...
	I0821 04:31:36.291454    5503 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:31:36.291456    5503 out.go:309] Setting ErrFile to fd 2...
	I0821 04:31:36.291459    5503 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:31:36.291569    5503 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17102-920/.minikube/bin
	I0821 04:31:36.292546    5503 out.go:303] Setting JSON to false
	I0821 04:31:36.307829    5503 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3670,"bootTime":1692613826,"procs":416,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0821 04:31:36.307899    5503 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0821 04:31:36.312366    5503 out.go:177] * [kubenet-797000] minikube v1.31.2 on Darwin 13.5 (arm64)
	I0821 04:31:36.320310    5503 notify.go:220] Checking for updates...
	I0821 04:31:36.324162    5503 out.go:177]   - MINIKUBE_LOCATION=17102
	I0821 04:31:36.327293    5503 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	I0821 04:31:36.330316    5503 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0821 04:31:36.334137    5503 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0821 04:31:36.337343    5503 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	I0821 04:31:36.340334    5503 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0821 04:31:36.343578    5503 config.go:182] Loaded profile config "multinode-806000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0821 04:31:36.343629    5503 driver.go:373] Setting default libvirt URI to qemu:///system
	I0821 04:31:36.347299    5503 out.go:177] * Using the qemu2 driver based on user configuration
	I0821 04:31:36.354297    5503 start.go:298] selected driver: qemu2
	I0821 04:31:36.354302    5503 start.go:902] validating driver "qemu2" against <nil>
	I0821 04:31:36.354307    5503 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0821 04:31:36.356614    5503 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0821 04:31:36.360269    5503 out.go:177] * Automatically selected the socket_vmnet network
	I0821 04:31:36.363409    5503 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0821 04:31:36.363437    5503 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0821 04:31:36.363442    5503 start_flags.go:319] config:
	{Name:kubenet-797000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:kubenet-797000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 04:31:36.367533    5503 iso.go:125] acquiring lock: {Name:mk813ea611542195bb0511881888be3fabc72ff6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 04:31:36.371288    5503 out.go:177] * Starting control plane node kubenet-797000 in cluster kubenet-797000
	I0821 04:31:36.379348    5503 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime docker
	I0821 04:31:36.379383    5503 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4
	I0821 04:31:36.379400    5503 cache.go:57] Caching tarball of preloaded images
	I0821 04:31:36.379499    5503 preload.go:174] Found /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0821 04:31:36.379504    5503 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on docker
	I0821 04:31:36.379576    5503 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/kubenet-797000/config.json ...
	I0821 04:31:36.379588    5503 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/kubenet-797000/config.json: {Name:mkbf48e9a5fff150a8d38c90d9a3c66cbeea5ca3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 04:31:36.379795    5503 start.go:365] acquiring machines lock for kubenet-797000: {Name:mk9b32d9fe994be32d77812db464b2cfa7bfb400 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0821 04:31:36.379823    5503 start.go:369] acquired machines lock for "kubenet-797000" in 22.375µs
	I0821 04:31:36.379834    5503 start.go:93] Provisioning new machine with config: &{Name:kubenet-797000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:kubenet-797000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0821 04:31:36.379873    5503 start.go:125] createHost starting for "" (driver="qemu2")
	I0821 04:31:36.387358    5503 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0821 04:31:36.402739    5503 start.go:159] libmachine.API.Create for "kubenet-797000" (driver="qemu2")
	I0821 04:31:36.402763    5503 client.go:168] LocalClient.Create starting
	I0821 04:31:36.402816    5503 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem
	I0821 04:31:36.402847    5503 main.go:141] libmachine: Decoding PEM data...
	I0821 04:31:36.402859    5503 main.go:141] libmachine: Parsing certificate...
	I0821 04:31:36.402905    5503 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/cert.pem
	I0821 04:31:36.402922    5503 main.go:141] libmachine: Decoding PEM data...
	I0821 04:31:36.402931    5503 main.go:141] libmachine: Parsing certificate...
	I0821 04:31:36.403245    5503 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17102-920/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17102-920/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0821 04:31:36.523191    5503 main.go:141] libmachine: Creating SSH key...
	I0821 04:31:36.600731    5503 main.go:141] libmachine: Creating Disk image...
	I0821 04:31:36.600736    5503 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0821 04:31:36.600876    5503 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17102-920/.minikube/machines/kubenet-797000/disk.qcow2.raw /Users/jenkins/minikube-integration/17102-920/.minikube/machines/kubenet-797000/disk.qcow2
	I0821 04:31:36.609326    5503 main.go:141] libmachine: STDOUT: 
	I0821 04:31:36.609342    5503 main.go:141] libmachine: STDERR: 
	I0821 04:31:36.609402    5503 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/kubenet-797000/disk.qcow2 +20000M
	I0821 04:31:36.616526    5503 main.go:141] libmachine: STDOUT: Image resized.
	
	I0821 04:31:36.616547    5503 main.go:141] libmachine: STDERR: 
	I0821 04:31:36.616562    5503 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17102-920/.minikube/machines/kubenet-797000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17102-920/.minikube/machines/kubenet-797000/disk.qcow2
	I0821 04:31:36.616569    5503 main.go:141] libmachine: Starting QEMU VM...
	I0821 04:31:36.616606    5503 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17102-920/.minikube/machines/kubenet-797000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/kubenet-797000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17102-920/.minikube/machines/kubenet-797000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:65:96:bb:cd:23 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/kubenet-797000/disk.qcow2
	I0821 04:31:36.618106    5503 main.go:141] libmachine: STDOUT: 
	I0821 04:31:36.618120    5503 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0821 04:31:36.618141    5503 client.go:171] LocalClient.Create took 215.374334ms
	I0821 04:31:38.620266    5503 start.go:128] duration metric: createHost completed in 2.240419042s
	I0821 04:31:38.620332    5503 start.go:83] releasing machines lock for "kubenet-797000", held for 2.240542542s
	W0821 04:31:38.620420    5503 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:31:38.628704    5503 out.go:177] * Deleting "kubenet-797000" in qemu2 ...
	W0821 04:31:38.650898    5503 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:31:38.650941    5503 start.go:687] Will try again in 5 seconds ...
	I0821 04:31:43.653086    5503 start.go:365] acquiring machines lock for kubenet-797000: {Name:mk9b32d9fe994be32d77812db464b2cfa7bfb400 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0821 04:31:43.653449    5503 start.go:369] acquired machines lock for "kubenet-797000" in 292.334µs
	I0821 04:31:43.653579    5503 start.go:93] Provisioning new machine with config: &{Name:kubenet-797000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:kubenet-797000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0821 04:31:43.653816    5503 start.go:125] createHost starting for "" (driver="qemu2")
	I0821 04:31:43.662434    5503 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0821 04:31:43.709040    5503 start.go:159] libmachine.API.Create for "kubenet-797000" (driver="qemu2")
	I0821 04:31:43.709087    5503 client.go:168] LocalClient.Create starting
	I0821 04:31:43.709208    5503 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem
	I0821 04:31:43.709261    5503 main.go:141] libmachine: Decoding PEM data...
	I0821 04:31:43.709280    5503 main.go:141] libmachine: Parsing certificate...
	I0821 04:31:43.709348    5503 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/cert.pem
	I0821 04:31:43.709406    5503 main.go:141] libmachine: Decoding PEM data...
	I0821 04:31:43.709423    5503 main.go:141] libmachine: Parsing certificate...
	I0821 04:31:43.709918    5503 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17102-920/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17102-920/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0821 04:31:43.842959    5503 main.go:141] libmachine: Creating SSH key...
	I0821 04:31:43.910294    5503 main.go:141] libmachine: Creating Disk image...
	I0821 04:31:43.910299    5503 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0821 04:31:43.910450    5503 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17102-920/.minikube/machines/kubenet-797000/disk.qcow2.raw /Users/jenkins/minikube-integration/17102-920/.minikube/machines/kubenet-797000/disk.qcow2
	I0821 04:31:43.918838    5503 main.go:141] libmachine: STDOUT: 
	I0821 04:31:43.918852    5503 main.go:141] libmachine: STDERR: 
	I0821 04:31:43.918915    5503 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/kubenet-797000/disk.qcow2 +20000M
	I0821 04:31:43.932677    5503 main.go:141] libmachine: STDOUT: Image resized.
	
	I0821 04:31:43.932691    5503 main.go:141] libmachine: STDERR: 
	I0821 04:31:43.932702    5503 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17102-920/.minikube/machines/kubenet-797000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17102-920/.minikube/machines/kubenet-797000/disk.qcow2
	I0821 04:31:43.932709    5503 main.go:141] libmachine: Starting QEMU VM...
	I0821 04:31:43.932745    5503 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17102-920/.minikube/machines/kubenet-797000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/kubenet-797000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17102-920/.minikube/machines/kubenet-797000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:4c:d6:85:c6:45 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/kubenet-797000/disk.qcow2
	I0821 04:31:43.934235    5503 main.go:141] libmachine: STDOUT: 
	I0821 04:31:43.934262    5503 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0821 04:31:43.934275    5503 client.go:171] LocalClient.Create took 225.186417ms
	I0821 04:31:45.936442    5503 start.go:128] duration metric: createHost completed in 2.282632167s
	I0821 04:31:45.936522    5503 start.go:83] releasing machines lock for "kubenet-797000", held for 2.283098334s
	W0821 04:31:45.937016    5503 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-797000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-797000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:31:45.941740    5503 out.go:177] 
	W0821 04:31:45.948815    5503 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0821 04:31:45.948842    5503 out.go:239] * 
	* 
	W0821 04:31:45.951390    5503 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0821 04:31:45.960580    5503 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.71s)
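
Every start attempt in this group dies at the same libmachine step: socket_vmnet_client exits with Failed to connect to "/var/run/socket_vmnet": Connection refused, which means nothing was listening on the socket_vmnet unix socket on the build agent. A minimal stand-alone probe for that condition might look like the sketch below (a hypothetical diagnostic, not part of the test suite; only the socket path is taken from the log above):

	// probe_socket_vmnet.go - hypothetical diagnostic, not part of minikube.
	// If the socket_vmnet daemon is not running, net.Dial fails with
	// "connection refused", matching the STDERR captured by libmachine above.
	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		conn, err := net.Dial("unix", "/var/run/socket_vmnet") // path from the log
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}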

TestStartStop/group/old-k8s-version/serial/FirstStart (9.81s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-137000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0
E0821 04:31:55.585839    1362 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-137000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0: exit status 80 (9.741006041s)

-- stdout --
	* [old-k8s-version-137000] minikube v1.31.2 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17102
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node old-k8s-version-137000 in cluster old-k8s-version-137000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-137000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0821 04:31:48.104547    5616 out.go:296] Setting OutFile to fd 1 ...
	I0821 04:31:48.104660    5616 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:31:48.104663    5616 out.go:309] Setting ErrFile to fd 2...
	I0821 04:31:48.104665    5616 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:31:48.104773    5616 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17102-920/.minikube/bin
	I0821 04:31:48.105789    5616 out.go:303] Setting JSON to false
	I0821 04:31:48.120819    5616 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3682,"bootTime":1692613826,"procs":419,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0821 04:31:48.120885    5616 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0821 04:31:48.126402    5616 out.go:177] * [old-k8s-version-137000] minikube v1.31.2 on Darwin 13.5 (arm64)
	I0821 04:31:48.134416    5616 out.go:177]   - MINIKUBE_LOCATION=17102
	I0821 04:31:48.138378    5616 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	I0821 04:31:48.134515    5616 notify.go:220] Checking for updates...
	I0821 04:31:48.144382    5616 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0821 04:31:48.148356    5616 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0821 04:31:48.151414    5616 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	I0821 04:31:48.154365    5616 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0821 04:31:48.157581    5616 config.go:182] Loaded profile config "multinode-806000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0821 04:31:48.157626    5616 driver.go:373] Setting default libvirt URI to qemu:///system
	I0821 04:31:48.161327    5616 out.go:177] * Using the qemu2 driver based on user configuration
	I0821 04:31:48.168303    5616 start.go:298] selected driver: qemu2
	I0821 04:31:48.168310    5616 start.go:902] validating driver "qemu2" against <nil>
	I0821 04:31:48.168319    5616 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0821 04:31:48.170381    5616 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0821 04:31:48.174292    5616 out.go:177] * Automatically selected the socket_vmnet network
	I0821 04:31:48.177468    5616 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0821 04:31:48.177492    5616 cni.go:84] Creating CNI manager for ""
	I0821 04:31:48.177500    5616 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0821 04:31:48.177512    5616 start_flags.go:319] config:
	{Name:old-k8s-version-137000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-137000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 04:31:48.181886    5616 iso.go:125] acquiring lock: {Name:mk813ea611542195bb0511881888be3fabc72ff6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 04:31:48.190309    5616 out.go:177] * Starting control plane node old-k8s-version-137000 in cluster old-k8s-version-137000
	I0821 04:31:48.194315    5616 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0821 04:31:48.194341    5616 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0821 04:31:48.194352    5616 cache.go:57] Caching tarball of preloaded images
	I0821 04:31:48.194441    5616 preload.go:174] Found /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0821 04:31:48.194450    5616 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0821 04:31:48.194523    5616 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/old-k8s-version-137000/config.json ...
	I0821 04:31:48.194536    5616 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/old-k8s-version-137000/config.json: {Name:mkbcb7da87d2a9819c60cd7e2470cf2e1b219cf6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 04:31:48.194756    5616 start.go:365] acquiring machines lock for old-k8s-version-137000: {Name:mk9b32d9fe994be32d77812db464b2cfa7bfb400 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0821 04:31:48.194788    5616 start.go:369] acquired machines lock for "old-k8s-version-137000" in 26.25µs
	I0821 04:31:48.194801    5616 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-137000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-137000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0821 04:31:48.194831    5616 start.go:125] createHost starting for "" (driver="qemu2")
	I0821 04:31:48.202328    5616 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0821 04:31:48.219168    5616 start.go:159] libmachine.API.Create for "old-k8s-version-137000" (driver="qemu2")
	I0821 04:31:48.219198    5616 client.go:168] LocalClient.Create starting
	I0821 04:31:48.219251    5616 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem
	I0821 04:31:48.219277    5616 main.go:141] libmachine: Decoding PEM data...
	I0821 04:31:48.219293    5616 main.go:141] libmachine: Parsing certificate...
	I0821 04:31:48.219346    5616 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/cert.pem
	I0821 04:31:48.219365    5616 main.go:141] libmachine: Decoding PEM data...
	I0821 04:31:48.219376    5616 main.go:141] libmachine: Parsing certificate...
	I0821 04:31:48.219742    5616 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17102-920/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17102-920/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0821 04:31:48.341701    5616 main.go:141] libmachine: Creating SSH key...
	I0821 04:31:48.443215    5616 main.go:141] libmachine: Creating Disk image...
	I0821 04:31:48.443221    5616 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0821 04:31:48.443349    5616 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17102-920/.minikube/machines/old-k8s-version-137000/disk.qcow2.raw /Users/jenkins/minikube-integration/17102-920/.minikube/machines/old-k8s-version-137000/disk.qcow2
	I0821 04:31:48.452154    5616 main.go:141] libmachine: STDOUT: 
	I0821 04:31:48.452168    5616 main.go:141] libmachine: STDERR: 
	I0821 04:31:48.452220    5616 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/old-k8s-version-137000/disk.qcow2 +20000M
	I0821 04:31:48.459299    5616 main.go:141] libmachine: STDOUT: Image resized.
	
	I0821 04:31:48.459312    5616 main.go:141] libmachine: STDERR: 
	I0821 04:31:48.459330    5616 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17102-920/.minikube/machines/old-k8s-version-137000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17102-920/.minikube/machines/old-k8s-version-137000/disk.qcow2
	I0821 04:31:48.459336    5616 main.go:141] libmachine: Starting QEMU VM...
	I0821 04:31:48.459376    5616 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17102-920/.minikube/machines/old-k8s-version-137000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/old-k8s-version-137000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17102-920/.minikube/machines/old-k8s-version-137000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:94:2e:b8:a2:4a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/old-k8s-version-137000/disk.qcow2
	I0821 04:31:48.460910    5616 main.go:141] libmachine: STDOUT: 
	I0821 04:31:48.460927    5616 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0821 04:31:48.460949    5616 client.go:171] LocalClient.Create took 241.747541ms
	I0821 04:31:50.463069    5616 start.go:128] duration metric: createHost completed in 2.268265625s
	I0821 04:31:50.463139    5616 start.go:83] releasing machines lock for "old-k8s-version-137000", held for 2.268384459s
	W0821 04:31:50.463191    5616 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:31:50.469659    5616 out.go:177] * Deleting "old-k8s-version-137000" in qemu2 ...
	W0821 04:31:50.489107    5616 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:31:50.489136    5616 start.go:687] Will try again in 5 seconds ...
	I0821 04:31:55.491343    5616 start.go:365] acquiring machines lock for old-k8s-version-137000: {Name:mk9b32d9fe994be32d77812db464b2cfa7bfb400 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0821 04:31:55.491788    5616 start.go:369] acquired machines lock for "old-k8s-version-137000" in 330.959µs
	I0821 04:31:55.491917    5616 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-137000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-137000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0821 04:31:55.492218    5616 start.go:125] createHost starting for "" (driver="qemu2")
	I0821 04:31:55.500881    5616 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0821 04:31:55.549371    5616 start.go:159] libmachine.API.Create for "old-k8s-version-137000" (driver="qemu2")
	I0821 04:31:55.549420    5616 client.go:168] LocalClient.Create starting
	I0821 04:31:55.549533    5616 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem
	I0821 04:31:55.549594    5616 main.go:141] libmachine: Decoding PEM data...
	I0821 04:31:55.549610    5616 main.go:141] libmachine: Parsing certificate...
	I0821 04:31:55.549684    5616 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/cert.pem
	I0821 04:31:55.549724    5616 main.go:141] libmachine: Decoding PEM data...
	I0821 04:31:55.549739    5616 main.go:141] libmachine: Parsing certificate...
	I0821 04:31:55.550241    5616 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17102-920/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17102-920/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0821 04:31:55.683394    5616 main.go:141] libmachine: Creating SSH key...
	I0821 04:31:55.756079    5616 main.go:141] libmachine: Creating Disk image...
	I0821 04:31:55.756085    5616 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0821 04:31:55.756233    5616 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17102-920/.minikube/machines/old-k8s-version-137000/disk.qcow2.raw /Users/jenkins/minikube-integration/17102-920/.minikube/machines/old-k8s-version-137000/disk.qcow2
	I0821 04:31:55.764769    5616 main.go:141] libmachine: STDOUT: 
	I0821 04:31:55.764782    5616 main.go:141] libmachine: STDERR: 
	I0821 04:31:55.764834    5616 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/old-k8s-version-137000/disk.qcow2 +20000M
	I0821 04:31:55.771949    5616 main.go:141] libmachine: STDOUT: Image resized.
	
	I0821 04:31:55.771961    5616 main.go:141] libmachine: STDERR: 
	I0821 04:31:55.771981    5616 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17102-920/.minikube/machines/old-k8s-version-137000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17102-920/.minikube/machines/old-k8s-version-137000/disk.qcow2
	I0821 04:31:55.771986    5616 main.go:141] libmachine: Starting QEMU VM...
	I0821 04:31:55.772020    5616 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17102-920/.minikube/machines/old-k8s-version-137000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/old-k8s-version-137000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17102-920/.minikube/machines/old-k8s-version-137000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:f0:00:b4:b6:4a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/old-k8s-version-137000/disk.qcow2
	I0821 04:31:55.773469    5616 main.go:141] libmachine: STDOUT: 
	I0821 04:31:55.773487    5616 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0821 04:31:55.773498    5616 client.go:171] LocalClient.Create took 224.074292ms
	I0821 04:31:57.775608    5616 start.go:128] duration metric: createHost completed in 2.28341075s
	I0821 04:31:57.775675    5616 start.go:83] releasing machines lock for "old-k8s-version-137000", held for 2.283903292s
	W0821 04:31:57.776123    5616 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-137000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-137000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:31:57.787661    5616 out.go:177] 
	W0821 04:31:57.791965    5616 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0821 04:31:57.792034    5616 out.go:239] * 
	* 
	W0821 04:31:57.794753    5616 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0821 04:31:57.804822    5616 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-137000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-137000 -n old-k8s-version-137000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-137000 -n old-k8s-version-137000: exit status 7 (65.789041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-137000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.81s)
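
Beyond the failure itself, the stderr above shows minikube's recovery shape: after "! StartHost failed, but will try again", the profile is deleted and host creation is retried once, five seconds later, before GUEST_PROVISION is surfaced. A sketch of that control flow, assuming only the shape visible in the log (illustrative, not minikube's actual start.go):

	// Illustrative retry shape inferred from "Will try again in 5 seconds":
	// one delayed retry of host creation, then the error is surfaced.
	package main

	import (
		"fmt"
		"time"
	)

	func startWithRetry(create func() error) error {
		if err := create(); err != nil {
			fmt.Printf("! StartHost failed, but will try again: %v\n", err)
			time.Sleep(5 * time.Second)
			return create()
		}
		return nil
	}

	func main() {
		err := startWithRetry(func() error {
			return fmt.Errorf(`connect "/var/run/socket_vmnet": connection refused`)
		})
		fmt.Println("final:", err)
	}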

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-137000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-137000 create -f testdata/busybox.yaml: exit status 1 (28.437625ms)

** stderr ** 
	error: no openapi getter

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-137000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-137000 -n old-k8s-version-137000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-137000 -n old-k8s-version-137000: exit status 7 (28.654083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-137000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-137000 -n old-k8s-version-137000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-137000 -n old-k8s-version-137000: exit status 7 (28.658584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-137000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
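
This failure is a direct consequence of FirstStart: the cluster never came up, so the kubeconfig carries no usable "old-k8s-version-137000" context and kubectl aborts before reaching any API server. A pre-flight check for that state could look like the sketch below (it assumes the standard kubeconfig loading rules from k8s.io/client-go; hypothetical, not part of the suite):

	// Sketch: verify the kubectl context exists before shelling out.
	// Uses client-go's clientcmd loading rules (KUBECONFIG env or default path).
	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
		if err != nil {
			fmt.Println("cannot load kubeconfig:", err)
			return
		}
		if _, ok := cfg.Contexts["old-k8s-version-137000"]; !ok {
			fmt.Println(`context "old-k8s-version-137000" does not exist`)
		}
	}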

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-137000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-137000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-137000 describe deploy/metrics-server -n kube-system: exit status 1 (24.940916ms)

** stderr ** 
	error: context "old-k8s-version-137000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-137000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-137000 -n old-k8s-version-137000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-137000 -n old-k8s-version-137000: exit status 7 (28.539125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-137000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)
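
The assertion at start_stop_delete_test.go:221 expects the deployment image to contain " fake.domain/registry.k8s.io/echoserver:1.4", i.e. the --registries override joined onto the --images override passed to "addons enable" above. A minimal sketch of that composition, assuming the slash-join convention implied by the expected string (illustrative only, not the suite's code):

	// Sketch of how the expected image string is composed from the
	// --images and --registries flags seen in the command above.
	package main

	import "fmt"

	func main() {
		images := map[string]string{"MetricsServer": "registry.k8s.io/echoserver:1.4"}
		registries := map[string]string{"MetricsServer": "fake.domain"}
		expected := registries["MetricsServer"] + "/" + images["MetricsServer"]
		fmt.Println(expected) // fake.domain/registry.k8s.io/echoserver:1.4
	}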

TestStartStop/group/old-k8s-version/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-137000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-137000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0: exit status 80 (5.186600875s)

-- stdout --
	* [old-k8s-version-137000] minikube v1.31.2 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17102
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.27.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.4
	* Using the qemu2 driver based on existing profile
	* Starting control plane node old-k8s-version-137000 in cluster old-k8s-version-137000
	* Restarting existing qemu2 VM for "old-k8s-version-137000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-137000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0821 04:31:58.261940    5648 out.go:296] Setting OutFile to fd 1 ...
	I0821 04:31:58.262064    5648 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:31:58.262067    5648 out.go:309] Setting ErrFile to fd 2...
	I0821 04:31:58.262069    5648 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:31:58.262171    5648 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17102-920/.minikube/bin
	I0821 04:31:58.263164    5648 out.go:303] Setting JSON to false
	I0821 04:31:58.278267    5648 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3692,"bootTime":1692613826,"procs":417,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0821 04:31:58.278353    5648 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0821 04:31:58.282817    5648 out.go:177] * [old-k8s-version-137000] minikube v1.31.2 on Darwin 13.5 (arm64)
	I0821 04:31:58.289789    5648 out.go:177]   - MINIKUBE_LOCATION=17102
	I0821 04:31:58.289843    5648 notify.go:220] Checking for updates...
	I0821 04:31:58.293751    5648 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	I0821 04:31:58.297615    5648 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0821 04:31:58.300684    5648 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0821 04:31:58.303750    5648 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	I0821 04:31:58.306780    5648 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0821 04:31:58.310019    5648 config.go:182] Loaded profile config "old-k8s-version-137000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0821 04:31:58.313690    5648 out.go:177] * Kubernetes 1.27.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.4
	I0821 04:31:58.316740    5648 driver.go:373] Setting default libvirt URI to qemu:///system
	I0821 04:31:58.320752    5648 out.go:177] * Using the qemu2 driver based on existing profile
	I0821 04:31:58.327705    5648 start.go:298] selected driver: qemu2
	I0821 04:31:58.327709    5648 start.go:902] validating driver "qemu2" against &{Name:old-k8s-version-137000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-137000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 04:31:58.327764    5648 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0821 04:31:58.329892    5648 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0821 04:31:58.329917    5648 cni.go:84] Creating CNI manager for ""
	I0821 04:31:58.329923    5648 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0821 04:31:58.329931    5648 start_flags.go:319] config:
	{Name:old-k8s-version-137000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-137000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 04:31:58.334220    5648 iso.go:125] acquiring lock: {Name:mk813ea611542195bb0511881888be3fabc72ff6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 04:31:58.338865    5648 out.go:177] * Starting control plane node old-k8s-version-137000 in cluster old-k8s-version-137000
	I0821 04:31:58.346690    5648 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0821 04:31:58.346717    5648 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0821 04:31:58.346741    5648 cache.go:57] Caching tarball of preloaded images
	I0821 04:31:58.346847    5648 preload.go:174] Found /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0821 04:31:58.346857    5648 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0821 04:31:58.346933    5648 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/old-k8s-version-137000/config.json ...
	I0821 04:31:58.347255    5648 start.go:365] acquiring machines lock for old-k8s-version-137000: {Name:mk9b32d9fe994be32d77812db464b2cfa7bfb400 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0821 04:31:58.347284    5648 start.go:369] acquired machines lock for "old-k8s-version-137000" in 22.708µs
	I0821 04:31:58.347293    5648 start.go:96] Skipping create...Using existing machine configuration
	I0821 04:31:58.347297    5648 fix.go:54] fixHost starting: 
	I0821 04:31:58.347409    5648 fix.go:102] recreateIfNeeded on old-k8s-version-137000: state=Stopped err=<nil>
	W0821 04:31:58.347417    5648 fix.go:128] unexpected machine state, will restart: <nil>
	I0821 04:31:58.355733    5648 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-137000" ...
	I0821 04:31:58.359705    5648 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17102-920/.minikube/machines/old-k8s-version-137000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/old-k8s-version-137000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17102-920/.minikube/machines/old-k8s-version-137000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:f0:00:b4:b6:4a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/old-k8s-version-137000/disk.qcow2
	I0821 04:31:58.361685    5648 main.go:141] libmachine: STDOUT: 
	I0821 04:31:58.361701    5648 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0821 04:31:58.361732    5648 fix.go:56] fixHost completed within 14.432958ms
	I0821 04:31:58.361739    5648 start.go:83] releasing machines lock for "old-k8s-version-137000", held for 14.451875ms
	W0821 04:31:58.361745    5648 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0821 04:31:58.361779    5648 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:31:58.361783    5648 start.go:687] Will try again in 5 seconds ...
	I0821 04:32:03.363883    5648 start.go:365] acquiring machines lock for old-k8s-version-137000: {Name:mk9b32d9fe994be32d77812db464b2cfa7bfb400 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0821 04:32:03.364274    5648 start.go:369] acquired machines lock for "old-k8s-version-137000" in 304.875µs
	I0821 04:32:03.364417    5648 start.go:96] Skipping create...Using existing machine configuration
	I0821 04:32:03.364435    5648 fix.go:54] fixHost starting: 
	I0821 04:32:03.365238    5648 fix.go:102] recreateIfNeeded on old-k8s-version-137000: state=Stopped err=<nil>
	W0821 04:32:03.365266    5648 fix.go:128] unexpected machine state, will restart: <nil>
	I0821 04:32:03.369556    5648 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-137000" ...
	I0821 04:32:03.377857    5648 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17102-920/.minikube/machines/old-k8s-version-137000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/old-k8s-version-137000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17102-920/.minikube/machines/old-k8s-version-137000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:f0:00:b4:b6:4a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/old-k8s-version-137000/disk.qcow2
	I0821 04:32:03.386714    5648 main.go:141] libmachine: STDOUT: 
	I0821 04:32:03.386764    5648 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0821 04:32:03.386857    5648 fix.go:56] fixHost completed within 22.422875ms
	I0821 04:32:03.386876    5648 start.go:83] releasing machines lock for "old-k8s-version-137000", held for 22.581667ms
	W0821 04:32:03.387066    5648 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-137000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-137000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:32:03.394755    5648 out.go:177] 
	W0821 04:32:03.397735    5648 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0821 04:32:03.397756    5648 out.go:239] * 
	* 
	W0821 04:32:03.400225    5648 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0821 04:32:03.408720    5648 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-137000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-137000 -n old-k8s-version-137000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-137000 -n old-k8s-version-137000: exit status 7 (67.704125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-137000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.26s)
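
The post-mortem blocks throughout this group run out/minikube-darwin-arm64 status --format={{.Host}} and, per helpers_test.go:239, treat exit status 7 as "may be ok" (the host exists but is stopped) rather than as a hard error. A Go sketch of that interpretation, assuming only the exit-code convention shown in the log (the program below is hypothetical, not the suite's helper):

	// Hypothetical mirror of the post-mortem status check: exit status 7
	// from `minikube status` means the host is stopped, which the helpers
	// report as "may be ok" instead of failing outright.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64",
			"status", "--format={{.Host}}", "-p", "old-k8s-version-137000")
		out, err := cmd.Output()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) && exitErr.ExitCode() == 7 {
			fmt.Printf("status error: exit status 7 (may be ok), host=%q\n",
				strings.TrimSpace(string(out)))
			return
		}
		if err != nil {
			fmt.Println("status failed:", err)
			return
		}
		fmt.Println("host:", strings.TrimSpace(string(out)))
	}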

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-137000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-137000 -n old-k8s-version-137000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-137000 -n old-k8s-version-137000: exit status 7 (31.728834ms)

-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-137000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-137000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-137000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-137000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (25.801166ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-137000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-137000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-137000 -n old-k8s-version-137000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-137000 -n old-k8s-version-137000: exit status 7 (28.62325ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-137000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.05s)
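
Both dashboard assertions abort immediately with `context "old-k8s-version-137000" does not exist`: the kubeconfig context is only written by a successful start, so every kubectl call fails before any polling begins. A hedged pre-flight sketch (hypothetical helper, not the suite's code) that surfaces the missing context directly:

    // Hypothetical check: does the kubeconfig contain the profile's context?
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func contextExists(name string) (bool, error) {
        // `kubectl config get-contexts -o name` prints one context per line.
        out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
        if err != nil {
            return false, err
        }
        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            if line == name {
                return true, nil
            }
        }
        return false, nil
    }

    func main() {
        ok, err := contextExists("old-k8s-version-137000")
        fmt.Println(ok, err) // false here: the failed start never wrote the context
    }
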

                                                
                                    
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p old-k8s-version-137000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p old-k8s-version-137000 "sudo crictl images -o json": exit status 89 (38.351708ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p old-k8s-version-137000"

                                                
                                                
-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p old-k8s-version-137000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p old-k8s-version-137000"
start_stop_delete_test.go:304: v1.16.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.6.2",
- 	"k8s.gcr.io/etcd:3.3.15-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.16.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.16.0",
- 	"k8s.gcr.io/kube-proxy:v1.16.0",
- 	"k8s.gcr.io/kube-scheduler:v1.16.0",
- 	"k8s.gcr.io/pause:3.1",
  }
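
The `(-want +got)` block above is go-cmp's diff of the expected image list against an empty result: every entry sits on the want side because no image listing could be obtained at all. A minimal sketch that reproduces the diff shape, assuming github.com/google/go-cmp and using two of the image names from this run:

    package main

    import (
        "fmt"

        "github.com/google/go-cmp/cmp"
    )

    func main() {
        want := []string{"k8s.gcr.io/pause:3.1", "k8s.gcr.io/kube-proxy:v1.16.0"}
        got := []string{} // nothing listed: the crictl call returned no JSON
        if diff := cmp.Diff(want, got); diff != "" {
            fmt.Printf("v1.16.0 images missing (-want +got):\n%s", diff)
        }
    }
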
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-137000 -n old-k8s-version-137000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-137000 -n old-k8s-version-137000: exit status 7 (27.646792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-137000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
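
The decode failure itself is ordinary encoding/json behavior: with the control plane down, minikube prints an advisory message instead of crictl's JSON, and the leading `*` can never begin a JSON value. A minimal sketch; the struct shape is an assumption about `crictl images -o json` output, not taken from the suite:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        notJSON := `* The control plane node must be running for this command`

        // Assumed shape of `crictl images -o json`, reduced to one field.
        var images struct {
            Images []struct {
                RepoTags []string `json:"repoTags"`
            } `json:"images"`
        }
        err := json.Unmarshal([]byte(notJSON), &images)
        fmt.Println(err) // invalid character '*' looking for beginning of value
    }
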

                                                
                                    
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-137000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-137000 --alsologtostderr -v=1: exit status 89 (41.840334ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p old-k8s-version-137000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0821 04:32:03.672311    5667 out.go:296] Setting OutFile to fd 1 ...
	I0821 04:32:03.672679    5667 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:32:03.672683    5667 out.go:309] Setting ErrFile to fd 2...
	I0821 04:32:03.672685    5667 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:32:03.672845    5667 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17102-920/.minikube/bin
	I0821 04:32:03.673049    5667 out.go:303] Setting JSON to false
	I0821 04:32:03.673057    5667 mustload.go:65] Loading cluster: old-k8s-version-137000
	I0821 04:32:03.673230    5667 config.go:182] Loaded profile config "old-k8s-version-137000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0821 04:32:03.677609    5667 out.go:177] * The control plane node must be running for this command
	I0821 04:32:03.681670    5667 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-137000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-137000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-137000 -n old-k8s-version-137000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-137000 -n old-k8s-version-137000: exit status 7 (28.728292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-137000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-137000 -n old-k8s-version-137000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-137000 -n old-k8s-version-137000: exit status 7 (28.912958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-137000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)

                                                
                                    
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-776000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.0-rc.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-776000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.0-rc.1: exit status 80 (9.749859125s)

                                                
                                                
-- stdout --
	* [no-preload-776000] minikube v1.31.2 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17102
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node no-preload-776000 in cluster no-preload-776000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-776000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0821 04:32:04.135471    5690 out.go:296] Setting OutFile to fd 1 ...
	I0821 04:32:04.135587    5690 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:32:04.135589    5690 out.go:309] Setting ErrFile to fd 2...
	I0821 04:32:04.135592    5690 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:32:04.135710    5690 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17102-920/.minikube/bin
	I0821 04:32:04.136790    5690 out.go:303] Setting JSON to false
	I0821 04:32:04.151868    5690 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3698,"bootTime":1692613826,"procs":417,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0821 04:32:04.151947    5690 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0821 04:32:04.157095    5690 out.go:177] * [no-preload-776000] minikube v1.31.2 on Darwin 13.5 (arm64)
	I0821 04:32:04.163993    5690 out.go:177]   - MINIKUBE_LOCATION=17102
	I0821 04:32:04.164037    5690 notify.go:220] Checking for updates...
	I0821 04:32:04.167095    5690 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	I0821 04:32:04.171069    5690 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0821 04:32:04.173961    5690 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0821 04:32:04.177037    5690 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	I0821 04:32:04.180066    5690 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0821 04:32:04.183410    5690 config.go:182] Loaded profile config "multinode-806000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0821 04:32:04.183455    5690 driver.go:373] Setting default libvirt URI to qemu:///system
	I0821 04:32:04.187993    5690 out.go:177] * Using the qemu2 driver based on user configuration
	I0821 04:32:04.194958    5690 start.go:298] selected driver: qemu2
	I0821 04:32:04.194963    5690 start.go:902] validating driver "qemu2" against <nil>
	I0821 04:32:04.194968    5690 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0821 04:32:04.196839    5690 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0821 04:32:04.201037    5690 out.go:177] * Automatically selected the socket_vmnet network
	I0821 04:32:04.204099    5690 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0821 04:32:04.204121    5690 cni.go:84] Creating CNI manager for ""
	I0821 04:32:04.204128    5690 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0821 04:32:04.204132    5690 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0821 04:32:04.204138    5690 start_flags.go:319] config:
	{Name:no-preload-776000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0-rc.1 ClusterName:no-preload-776000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 04:32:04.208212    5690 iso.go:125] acquiring lock: {Name:mk813ea611542195bb0511881888be3fabc72ff6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 04:32:04.216044    5690 out.go:177] * Starting control plane node no-preload-776000 in cluster no-preload-776000
	I0821 04:32:04.220028    5690 preload.go:132] Checking if preload exists for k8s version v1.28.0-rc.1 and runtime docker
	I0821 04:32:04.220128    5690 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/no-preload-776000/config.json ...
	I0821 04:32:04.220147    5690 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/no-preload-776000/config.json: {Name:mk46fd697badaf1b3715e7a99dcfd1b02b92f0fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 04:32:04.220149    5690 cache.go:107] acquiring lock: {Name:mk2c32575c8f9aa36e98dd49f399a8549ea6540f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 04:32:04.220153    5690 cache.go:107] acquiring lock: {Name:mk17cea0c1d1349315a99b95300fd3bc56df198e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 04:32:04.220167    5690 cache.go:107] acquiring lock: {Name:mk093194b09225e31ca3d4297f7dd696e7a766bf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 04:32:04.220217    5690 cache.go:115] /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0821 04:32:04.220226    5690 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 82.417µs
	I0821 04:32:04.220233    5690 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0821 04:32:04.220239    5690 cache.go:107] acquiring lock: {Name:mka97769706b184bdc4089846ee1cc643c3521ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 04:32:04.220287    5690 cache.go:107] acquiring lock: {Name:mkfd79da71094fcd535414a576a5c187de48b7dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 04:32:04.220293    5690 cache.go:107] acquiring lock: {Name:mk04af64563544958980d82d48d80b1384f2143b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 04:32:04.220297    5690 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.0-rc.1
	I0821 04:32:04.220366    5690 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.0-rc.1
	I0821 04:32:04.220395    5690 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.0-rc.1
	I0821 04:32:04.220392    5690 cache.go:107] acquiring lock: {Name:mk33a32720655e2ee201e5833545e76fe7059bfd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 04:32:04.220383    5690 cache.go:107] acquiring lock: {Name:mk3a08a63fcb857c9b949e96deeb01896ac90ffb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 04:32:04.220449    5690 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.0-rc.1
	I0821 04:32:04.220406    5690 start.go:365] acquiring machines lock for no-preload-776000: {Name:mk9b32d9fe994be32d77812db464b2cfa7bfb400 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0821 04:32:04.220578    5690 start.go:369] acquired machines lock for "no-preload-776000" in 96.083µs
	I0821 04:32:04.220623    5690 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0821 04:32:04.220593    5690 start.go:93] Provisioning new machine with config: &{Name:no-preload-776000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0-rc.1 ClusterName:no-preload-776000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0821 04:32:04.220639    5690 start.go:125] createHost starting for "" (driver="qemu2")
	I0821 04:32:04.220639    5690 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I0821 04:32:04.220643    5690 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I0821 04:32:04.227983    5690 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0821 04:32:04.228849    5690 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.0-rc.1
	I0821 04:32:04.233381    5690 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.0-rc.1
	I0821 04:32:04.233815    5690 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I0821 04:32:04.234529    5690 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0821 04:32:04.234641    5690 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I0821 04:32:04.234734    5690 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.0-rc.1
	I0821 04:32:04.234761    5690 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.0-rc.1
	I0821 04:32:04.244213    5690 start.go:159] libmachine.API.Create for "no-preload-776000" (driver="qemu2")
	I0821 04:32:04.244225    5690 client.go:168] LocalClient.Create starting
	I0821 04:32:04.244294    5690 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem
	I0821 04:32:04.244319    5690 main.go:141] libmachine: Decoding PEM data...
	I0821 04:32:04.244330    5690 main.go:141] libmachine: Parsing certificate...
	I0821 04:32:04.244377    5690 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/cert.pem
	I0821 04:32:04.244395    5690 main.go:141] libmachine: Decoding PEM data...
	I0821 04:32:04.244405    5690 main.go:141] libmachine: Parsing certificate...
	I0821 04:32:04.244738    5690 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17102-920/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17102-920/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0821 04:32:04.368394    5690 main.go:141] libmachine: Creating SSH key...
	I0821 04:32:04.462046    5690 main.go:141] libmachine: Creating Disk image...
	I0821 04:32:04.462063    5690 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0821 04:32:04.462244    5690 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17102-920/.minikube/machines/no-preload-776000/disk.qcow2.raw /Users/jenkins/minikube-integration/17102-920/.minikube/machines/no-preload-776000/disk.qcow2
	I0821 04:32:04.471047    5690 main.go:141] libmachine: STDOUT: 
	I0821 04:32:04.471066    5690 main.go:141] libmachine: STDERR: 
	I0821 04:32:04.471121    5690 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/no-preload-776000/disk.qcow2 +20000M
	I0821 04:32:04.479174    5690 main.go:141] libmachine: STDOUT: Image resized.
	
	I0821 04:32:04.479191    5690 main.go:141] libmachine: STDERR: 
	I0821 04:32:04.479209    5690 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17102-920/.minikube/machines/no-preload-776000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17102-920/.minikube/machines/no-preload-776000/disk.qcow2
	I0821 04:32:04.479214    5690 main.go:141] libmachine: Starting QEMU VM...
	I0821 04:32:04.479252    5690 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17102-920/.minikube/machines/no-preload-776000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/no-preload-776000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17102-920/.minikube/machines/no-preload-776000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:a9:0b:c1:9e:57 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/no-preload-776000/disk.qcow2
	I0821 04:32:04.480942    5690 main.go:141] libmachine: STDOUT: 
	I0821 04:32:04.480955    5690 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0821 04:32:04.480976    5690 client.go:171] LocalClient.Create took 236.74875ms
	I0821 04:32:04.799076    5690 cache.go:162] opening:  /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.0-rc.1
	I0821 04:32:04.935803    5690 cache.go:162] opening:  /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.0-rc.1
	I0821 04:32:05.044774    5690 cache.go:162] opening:  /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1
	I0821 04:32:05.249303    5690 cache.go:162] opening:  /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9
	I0821 04:32:05.371938    5690 cache.go:157] /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0821 04:32:05.371955    5690 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 1.1516625s
	I0821 04:32:05.371962    5690 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0821 04:32:05.450271    5690 cache.go:162] opening:  /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0
	I0821 04:32:05.678447    5690 cache.go:162] opening:  /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.0-rc.1
	I0821 04:32:05.863188    5690 cache.go:162] opening:  /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.0-rc.1
	I0821 04:32:06.481096    5690 start.go:128] duration metric: createHost completed in 2.260476084s
	I0821 04:32:06.481137    5690 start.go:83] releasing machines lock for "no-preload-776000", held for 2.260592375s
	W0821 04:32:06.481201    5690 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:32:06.490339    5690 out.go:177] * Deleting "no-preload-776000" in qemu2 ...
	W0821 04:32:06.512175    5690 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:32:06.512213    5690 start.go:687] Will try again in 5 seconds ...
	I0821 04:32:07.255354    5690 cache.go:157] /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 exists
	I0821 04:32:07.255399    5690 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.10.1" -> "/Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1" took 3.035212334s
	I0821 04:32:07.255424    5690 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.10.1 -> /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 succeeded
	I0821 04:32:07.647510    5690 cache.go:157] /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.0-rc.1 exists
	I0821 04:32:07.647571    5690 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.28.0-rc.1" -> "/Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.0-rc.1" took 3.427368125s
	I0821 04:32:07.647609    5690 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.28.0-rc.1 -> /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.0-rc.1 succeeded
	I0821 04:32:08.430160    5690 cache.go:157] /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.0-rc.1 exists
	I0821 04:32:08.430208    5690 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.28.0-rc.1" -> "/Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.0-rc.1" took 4.210134709s
	I0821 04:32:08.430238    5690 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.28.0-rc.1 -> /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.0-rc.1 succeeded
	I0821 04:32:09.583658    5690 cache.go:157] /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.0-rc.1 exists
	I0821 04:32:09.583669    5690 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.28.0-rc.1" -> "/Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.0-rc.1" took 5.36360925s
	I0821 04:32:09.583684    5690 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.28.0-rc.1 -> /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.0-rc.1 succeeded
	I0821 04:32:10.108431    5690 cache.go:157] /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.0-rc.1 exists
	I0821 04:32:10.108470    5690 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.28.0-rc.1" -> "/Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.0-rc.1" took 5.888284833s
	I0821 04:32:10.108496    5690 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.28.0-rc.1 -> /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.0-rc.1 succeeded
	I0821 04:32:11.512379    5690 start.go:365] acquiring machines lock for no-preload-776000: {Name:mk9b32d9fe994be32d77812db464b2cfa7bfb400 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0821 04:32:11.549289    5690 start.go:369] acquired machines lock for "no-preload-776000" in 36.839833ms
	I0821 04:32:11.549421    5690 start.go:93] Provisioning new machine with config: &{Name:no-preload-776000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0-rc.1 ClusterName:no-preload-776000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0821 04:32:11.549686    5690 start.go:125] createHost starting for "" (driver="qemu2")
	I0821 04:32:11.560234    5690 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0821 04:32:11.607960    5690 start.go:159] libmachine.API.Create for "no-preload-776000" (driver="qemu2")
	I0821 04:32:11.608007    5690 client.go:168] LocalClient.Create starting
	I0821 04:32:11.608140    5690 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem
	I0821 04:32:11.608206    5690 main.go:141] libmachine: Decoding PEM data...
	I0821 04:32:11.608232    5690 main.go:141] libmachine: Parsing certificate...
	I0821 04:32:11.608338    5690 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/cert.pem
	I0821 04:32:11.608378    5690 main.go:141] libmachine: Decoding PEM data...
	I0821 04:32:11.608392    5690 main.go:141] libmachine: Parsing certificate...
	I0821 04:32:11.608889    5690 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17102-920/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17102-920/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0821 04:32:11.741294    5690 main.go:141] libmachine: Creating SSH key...
	I0821 04:32:11.794669    5690 main.go:141] libmachine: Creating Disk image...
	I0821 04:32:11.794674    5690 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0821 04:32:11.794805    5690 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17102-920/.minikube/machines/no-preload-776000/disk.qcow2.raw /Users/jenkins/minikube-integration/17102-920/.minikube/machines/no-preload-776000/disk.qcow2
	I0821 04:32:11.803347    5690 main.go:141] libmachine: STDOUT: 
	I0821 04:32:11.803361    5690 main.go:141] libmachine: STDERR: 
	I0821 04:32:11.803428    5690 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/no-preload-776000/disk.qcow2 +20000M
	I0821 04:32:11.810595    5690 main.go:141] libmachine: STDOUT: Image resized.
	
	I0821 04:32:11.810612    5690 main.go:141] libmachine: STDERR: 
	I0821 04:32:11.810629    5690 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17102-920/.minikube/machines/no-preload-776000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17102-920/.minikube/machines/no-preload-776000/disk.qcow2
	I0821 04:32:11.810641    5690 main.go:141] libmachine: Starting QEMU VM...
	I0821 04:32:11.810691    5690 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17102-920/.minikube/machines/no-preload-776000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/no-preload-776000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17102-920/.minikube/machines/no-preload-776000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:19:10:9f:d3:83 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/no-preload-776000/disk.qcow2
	I0821 04:32:11.812303    5690 main.go:141] libmachine: STDOUT: 
	I0821 04:32:11.812328    5690 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0821 04:32:11.812340    5690 client.go:171] LocalClient.Create took 204.329ms
	I0821 04:32:13.799284    5690 cache.go:157] /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0 exists
	I0821 04:32:13.799333    5690 cache.go:96] cache image "registry.k8s.io/etcd:3.5.9-0" -> "/Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0" took 9.579197333s
	I0821 04:32:13.799367    5690 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.9-0 -> /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0 succeeded
	I0821 04:32:13.799416    5690 cache.go:87] Successfully saved all images to host disk.
	I0821 04:32:13.814450    5690 start.go:128] duration metric: createHost completed in 2.264790417s
	I0821 04:32:13.814634    5690 start.go:83] releasing machines lock for "no-preload-776000", held for 2.26522475s
	W0821 04:32:13.814937    5690 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-776000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-776000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:32:13.825318    5690 out.go:177] 
	W0821 04:32:13.830474    5690 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0821 04:32:13.830489    5690 out.go:239] * 
	* 
	W0821 04:32:13.832175    5690 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0821 04:32:13.841505    5690 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-776000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.0-rc.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-776000 -n no-preload-776000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-776000 -n no-preload-776000: exit status 7 (73.634625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-776000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.83s)
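
Both VM creation attempts die at the same step: `socket_vmnet_client` cannot reach `/var/run/socket_vmnet`, so QEMU never receives its network file descriptor and the start aborts with GUEST_PROVISION. "Connection refused" on a Unix socket means nothing is listening, i.e. the socket_vmnet daemon is not running on the CI host. A minimal Go probe (not the suite's code; the socket path is copied from this run) that reproduces the failure mode:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
        if err != nil {
            // In this run's state this prints: ... connect: connection refused
            fmt.Println("socket_vmnet unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }
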

                                                
                                    
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:195: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.2044584252.exe start -p stopped-upgrade-838000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:195: (dbg) Non-zero exit: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.2044584252.exe start -p stopped-upgrade-838000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.2044584252.exe: permission denied (1.167458ms)
version_upgrade_test.go:195: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.2044584252.exe start -p stopped-upgrade-838000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:195: (dbg) Non-zero exit: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.2044584252.exe start -p stopped-upgrade-838000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.2044584252.exe: permission denied (5.3655ms)
version_upgrade_test.go:195: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.2044584252.exe start -p stopped-upgrade-838000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:195: (dbg) Non-zero exit: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.2044584252.exe start -p stopped-upgrade-838000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.2044584252.exe: permission denied (5.25775ms)
version_upgrade_test.go:201: legacy v1.6.2 start failed: fork/exec /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.2044584252.exe: permission denied
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (3.15s)
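
`fork/exec ...: permission denied` is raised by the kernel before the legacy binary executes at all, which is why all three retries fail within milliseconds. A plausible cause (an assumption, not confirmed by the log) is a downloaded temp file without its execute bit, or a temp directory mounted noexec. A hedged sketch of the chmod-then-exec sequence; the path is a placeholder for the runtime-generated temp file:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        bin := "/tmp/minikube-v1.6.2.exe" // placeholder, not the real temp path

        // Without the execute bit, exec.Command fails with
        // "fork/exec <path>: permission denied" before minikube starts.
        if err := os.Chmod(bin, 0o755); err != nil {
            fmt.Println("chmod:", err)
            return
        }
        out, err := exec.Command(bin, "start", "-p", "stopped-upgrade-838000",
            "--memory=2200", "--vm-driver=qemu2").CombinedOutput()
        fmt.Println(string(out), err)
    }
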

                                                
                                    
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-838000
version_upgrade_test.go:218: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p stopped-upgrade-838000: exit status 85 (141.766625ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-797000 sudo                                  | bridge-797000          | jenkins | v1.31.2 | 21 Aug 23 04:31 PDT |                     |
	|         | systemctl status cri-docker                            |                        |         |         |                     |                     |
	|         | --all --full --no-pager                                |                        |         |         |                     |                     |
	| ssh     | -p bridge-797000 sudo                                  | bridge-797000          | jenkins | v1.31.2 | 21 Aug 23 04:31 PDT |                     |
	|         | systemctl cat cri-docker                               |                        |         |         |                     |                     |
	|         | --no-pager                                             |                        |         |         |                     |                     |
	| ssh     | -p bridge-797000 sudo cat                              | bridge-797000          | jenkins | v1.31.2 | 21 Aug 23 04:31 PDT |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf   |                        |         |         |                     |                     |
	| ssh     | -p bridge-797000 sudo cat                              | bridge-797000          | jenkins | v1.31.2 | 21 Aug 23 04:31 PDT |                     |
	|         | /usr/lib/systemd/system/cri-docker.service             |                        |         |         |                     |                     |
	| ssh     | -p bridge-797000 sudo                                  | bridge-797000          | jenkins | v1.31.2 | 21 Aug 23 04:31 PDT |                     |
	|         | cri-dockerd --version                                  |                        |         |         |                     |                     |
	| ssh     | -p bridge-797000 sudo                                  | bridge-797000          | jenkins | v1.31.2 | 21 Aug 23 04:31 PDT |                     |
	|         | systemctl status containerd                            |                        |         |         |                     |                     |
	|         | --all --full --no-pager                                |                        |         |         |                     |                     |
	| ssh     | -p bridge-797000 sudo                                  | bridge-797000          | jenkins | v1.31.2 | 21 Aug 23 04:31 PDT |                     |
	|         | systemctl cat containerd                               |                        |         |         |                     |                     |
	|         | --no-pager                                             |                        |         |         |                     |                     |
	| ssh     | -p bridge-797000 sudo cat                              | bridge-797000          | jenkins | v1.31.2 | 21 Aug 23 04:31 PDT |                     |
	|         | /lib/systemd/system/containerd.service                 |                        |         |         |                     |                     |
	| ssh     | -p bridge-797000 sudo cat                              | bridge-797000          | jenkins | v1.31.2 | 21 Aug 23 04:31 PDT |                     |
	|         | /etc/containerd/config.toml                            |                        |         |         |                     |                     |
	| ssh     | -p bridge-797000 sudo                                  | bridge-797000          | jenkins | v1.31.2 | 21 Aug 23 04:31 PDT |                     |
	|         | containerd config dump                                 |                        |         |         |                     |                     |
	| ssh     | -p bridge-797000 sudo                                  | bridge-797000          | jenkins | v1.31.2 | 21 Aug 23 04:31 PDT |                     |
	|         | systemctl status crio --all                            |                        |         |         |                     |                     |
	|         | --full --no-pager                                      |                        |         |         |                     |                     |
	| ssh     | -p bridge-797000 sudo                                  | bridge-797000          | jenkins | v1.31.2 | 21 Aug 23 04:31 PDT |                     |
	|         | systemctl cat crio --no-pager                          |                        |         |         |                     |                     |
	| ssh     | -p bridge-797000 sudo find                             | bridge-797000          | jenkins | v1.31.2 | 21 Aug 23 04:31 PDT |                     |
	|         | /etc/crio -type f -exec sh -c                          |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                        |         |         |                     |                     |
	| ssh     | -p bridge-797000 sudo crio                             | bridge-797000          | jenkins | v1.31.2 | 21 Aug 23 04:31 PDT |                     |
	|         | config                                                 |                        |         |         |                     |                     |
	| delete  | -p bridge-797000                                       | bridge-797000          | jenkins | v1.31.2 | 21 Aug 23 04:31 PDT | 21 Aug 23 04:31 PDT |
	| start   | -p kubenet-797000                                      | kubenet-797000         | jenkins | v1.31.2 | 21 Aug 23 04:31 PDT |                     |
	|         | --memory=3072                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                        |         |         |                     |                     |
	|         | --network-plugin=kubenet                               |                        |         |         |                     |                     |
	|         | --driver=qemu2                                         |                        |         |         |                     |                     |
	| ssh     | -p kubenet-797000 sudo cat                             | kubenet-797000         | jenkins | v1.31.2 | 21 Aug 23 04:31 PDT |                     |
	|         | /etc/nsswitch.conf                                     |                        |         |         |                     |                     |
	| ssh     | -p kubenet-797000 sudo cat                             | kubenet-797000         | jenkins | v1.31.2 | 21 Aug 23 04:31 PDT |                     |
	|         | /etc/hosts                                             |                        |         |         |                     |                     |
	| ssh     | -p kubenet-797000 sudo cat                             | kubenet-797000         | jenkins | v1.31.2 | 21 Aug 23 04:31 PDT |                     |
	|         | /etc/resolv.conf                                       |                        |         |         |                     |                     |
	| ssh     | -p kubenet-797000 sudo crictl                          | kubenet-797000         | jenkins | v1.31.2 | 21 Aug 23 04:31 PDT |                     |
	|         | pods                                                   |                        |         |         |                     |                     |
	| ssh     | -p kubenet-797000 sudo crictl                          | kubenet-797000         | jenkins | v1.31.2 | 21 Aug 23 04:31 PDT |                     |
	|         | ps --all                                               |                        |         |         |                     |                     |
	| ssh     | -p kubenet-797000 sudo find                            | kubenet-797000         | jenkins | v1.31.2 | 21 Aug 23 04:31 PDT |                     |
	|         | /etc/cni -type f -exec sh -c                           |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                        |         |         |                     |                     |
	| ssh     | -p kubenet-797000 sudo ip a s                          | kubenet-797000         | jenkins | v1.31.2 | 21 Aug 23 04:31 PDT |                     |
	| ssh     | -p kubenet-797000 sudo ip r s                          | kubenet-797000         | jenkins | v1.31.2 | 21 Aug 23 04:31 PDT |                     |
	| ssh     | -p kubenet-797000 sudo                                 | kubenet-797000         | jenkins | v1.31.2 | 21 Aug 23 04:31 PDT |                     |
	|         | iptables-save                                          |                        |         |         |                     |                     |
	| ssh     | -p kubenet-797000 sudo                                 | kubenet-797000         | jenkins | v1.31.2 | 21 Aug 23 04:31 PDT |                     |
	|         | iptables -t nat -L -n -v                               |                        |         |         |                     |                     |
	| ssh     | -p kubenet-797000 sudo                                 | kubenet-797000         | jenkins | v1.31.2 | 21 Aug 23 04:31 PDT |                     |
	|         | systemctl status kubelet --all                         |                        |         |         |                     |                     |
	|         | --full --no-pager                                      |                        |         |         |                     |                     |
	| ssh     | -p kubenet-797000 sudo                                 | kubenet-797000         | jenkins | v1.31.2 | 21 Aug 23 04:31 PDT |                     |
	|         | systemctl cat kubelet                                  |                        |         |         |                     |                     |
	|         | --no-pager                                             |                        |         |         |                     |                     |
	| ssh     | -p kubenet-797000 sudo                                 | kubenet-797000         | jenkins | v1.31.2 | 21 Aug 23 04:31 PDT |                     |
	|         | journalctl -xeu kubelet --all                          |                        |         |         |                     |                     |
	|         | --full --no-pager                                      |                        |         |         |                     |                     |
	| ssh     | -p kubenet-797000 sudo cat                             | kubenet-797000         | jenkins | v1.31.2 | 21 Aug 23 04:31 PDT |                     |
	|         | /etc/kubernetes/kubelet.conf                           |                        |         |         |                     |                     |
	| ssh     | -p kubenet-797000 sudo cat                             | kubenet-797000         | jenkins | v1.31.2 | 21 Aug 23 04:31 PDT |                     |
	|         | /var/lib/kubelet/config.yaml                           |                        |         |         |                     |                     |
	| ssh     | -p kubenet-797000 sudo                                 | kubenet-797000         | jenkins | v1.31.2 | 21 Aug 23 04:31 PDT |                     |
	|         | systemctl status docker --all                          |                        |         |         |                     |                     |
	|         | --full --no-pager                                      |                        |         |         |                     |                     |
	| ssh     | -p kubenet-797000 sudo                                 | kubenet-797000         | jenkins | v1.31.2 | 21 Aug 23 04:31 PDT |                     |
	|         | systemctl cat docker                                   |                        |         |         |                     |                     |
	|         | --no-pager                                             |                        |         |         |                     |                     |
	| ssh     | -p kubenet-797000 sudo cat                             | kubenet-797000         | jenkins | v1.31.2 | 21 Aug 23 04:31 PDT |                     |
	|         | /etc/docker/daemon.json                                |                        |         |         |                     |                     |
	| ssh     | -p kubenet-797000 sudo docker                          | kubenet-797000         | jenkins | v1.31.2 | 21 Aug 23 04:31 PDT |                     |
	|         | system info                                            |                        |         |         |                     |                     |
	| ssh     | -p kubenet-797000 sudo                                 | kubenet-797000         | jenkins | v1.31.2 | 21 Aug 23 04:31 PDT |                     |
	|         | systemctl status cri-docker                            |                        |         |         |                     |                     |
	|         | --all --full --no-pager                                |                        |         |         |                     |                     |
	| ssh     | -p kubenet-797000 sudo                                 | kubenet-797000         | jenkins | v1.31.2 | 21 Aug 23 04:31 PDT |                     |
	|         | systemctl cat cri-docker                               |                        |         |         |                     |                     |
	|         | --no-pager                                             |                        |         |         |                     |                     |
	| ssh     | -p kubenet-797000 sudo cat                             | kubenet-797000         | jenkins | v1.31.2 | 21 Aug 23 04:31 PDT |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf   |                        |         |         |                     |                     |
	| ssh     | -p kubenet-797000 sudo cat                             | kubenet-797000         | jenkins | v1.31.2 | 21 Aug 23 04:31 PDT |                     |
	|         | /usr/lib/systemd/system/cri-docker.service             |                        |         |         |                     |                     |
	| ssh     | -p kubenet-797000 sudo                                 | kubenet-797000         | jenkins | v1.31.2 | 21 Aug 23 04:31 PDT |                     |
	|         | cri-dockerd --version                                  |                        |         |         |                     |                     |
	| ssh     | -p kubenet-797000 sudo                                 | kubenet-797000         | jenkins | v1.31.2 | 21 Aug 23 04:31 PDT |                     |
	|         | systemctl status containerd                            |                        |         |         |                     |                     |
	|         | --all --full --no-pager                                |                        |         |         |                     |                     |
	| ssh     | -p kubenet-797000 sudo                                 | kubenet-797000         | jenkins | v1.31.2 | 21 Aug 23 04:31 PDT |                     |
	|         | systemctl cat containerd                               |                        |         |         |                     |                     |
	|         | --no-pager                                             |                        |         |         |                     |                     |
	| ssh     | -p kubenet-797000 sudo cat                             | kubenet-797000         | jenkins | v1.31.2 | 21 Aug 23 04:31 PDT |                     |
	|         | /lib/systemd/system/containerd.service                 |                        |         |         |                     |                     |
	| ssh     | -p kubenet-797000 sudo cat                             | kubenet-797000         | jenkins | v1.31.2 | 21 Aug 23 04:31 PDT |                     |
	|         | /etc/containerd/config.toml                            |                        |         |         |                     |                     |
	| ssh     | -p kubenet-797000 sudo                                 | kubenet-797000         | jenkins | v1.31.2 | 21 Aug 23 04:31 PDT |                     |
	|         | containerd config dump                                 |                        |         |         |                     |                     |
	| ssh     | -p kubenet-797000 sudo                                 | kubenet-797000         | jenkins | v1.31.2 | 21 Aug 23 04:31 PDT |                     |
	|         | systemctl status crio --all                            |                        |         |         |                     |                     |
	|         | --full --no-pager                                      |                        |         |         |                     |                     |
	| ssh     | -p kubenet-797000 sudo                                 | kubenet-797000         | jenkins | v1.31.2 | 21 Aug 23 04:31 PDT |                     |
	|         | systemctl cat crio --no-pager                          |                        |         |         |                     |                     |
	| ssh     | -p kubenet-797000 sudo find                            | kubenet-797000         | jenkins | v1.31.2 | 21 Aug 23 04:31 PDT |                     |
	|         | /etc/crio -type f -exec sh -c                          |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                        |         |         |                     |                     |
	| ssh     | -p kubenet-797000 sudo crio                            | kubenet-797000         | jenkins | v1.31.2 | 21 Aug 23 04:31 PDT |                     |
	|         | config                                                 |                        |         |         |                     |                     |
	| delete  | -p kubenet-797000                                      | kubenet-797000         | jenkins | v1.31.2 | 21 Aug 23 04:31 PDT | 21 Aug 23 04:31 PDT |
	| start   | -p old-k8s-version-137000                              | old-k8s-version-137000 | jenkins | v1.31.2 | 21 Aug 23 04:31 PDT |                     |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --kvm-network=default                                  |                        |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                        |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                        |         |         |                     |                     |
	|         | --keep-context=false                                   |                        |         |         |                     |                     |
	|         | --driver=qemu2                                         |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-137000        | old-k8s-version-137000 | jenkins | v1.31.2 | 21 Aug 23 04:31 PDT | 21 Aug 23 04:31 PDT |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p old-k8s-version-137000                              | old-k8s-version-137000 | jenkins | v1.31.2 | 21 Aug 23 04:31 PDT | 21 Aug 23 04:31 PDT |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-137000             | old-k8s-version-137000 | jenkins | v1.31.2 | 21 Aug 23 04:31 PDT | 21 Aug 23 04:31 PDT |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p old-k8s-version-137000                              | old-k8s-version-137000 | jenkins | v1.31.2 | 21 Aug 23 04:31 PDT |                     |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --kvm-network=default                                  |                        |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                        |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                        |         |         |                     |                     |
	|         | --keep-context=false                                   |                        |         |         |                     |                     |
	|         | --driver=qemu2                                         |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                        |         |         |                     |                     |
	| ssh     | -p old-k8s-version-137000 sudo                         | old-k8s-version-137000 | jenkins | v1.31.2 | 21 Aug 23 04:32 PDT |                     |
	|         | crictl images -o json                                  |                        |         |         |                     |                     |
	| pause   | -p old-k8s-version-137000                              | old-k8s-version-137000 | jenkins | v1.31.2 | 21 Aug 23 04:32 PDT |                     |
	|         | --alsologtostderr -v=1                                 |                        |         |         |                     |                     |
	| delete  | -p old-k8s-version-137000                              | old-k8s-version-137000 | jenkins | v1.31.2 | 21 Aug 23 04:32 PDT | 21 Aug 23 04:32 PDT |
	| delete  | -p old-k8s-version-137000                              | old-k8s-version-137000 | jenkins | v1.31.2 | 21 Aug 23 04:32 PDT | 21 Aug 23 04:32 PDT |
	| start   | -p no-preload-776000                                   | no-preload-776000      | jenkins | v1.31.2 | 21 Aug 23 04:32 PDT |                     |
	|         | --memory=2200 --alsologtostderr                        |                        |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                        |         |         |                     |                     |
	|         | --driver=qemu2                                         |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.0-rc.1                      |                        |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/21 04:32:04
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.7 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0821 04:32:04.135471    5690 out.go:296] Setting OutFile to fd 1 ...
	I0821 04:32:04.135587    5690 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:32:04.135589    5690 out.go:309] Setting ErrFile to fd 2...
	I0821 04:32:04.135592    5690 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:32:04.135710    5690 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17102-920/.minikube/bin
	I0821 04:32:04.136790    5690 out.go:303] Setting JSON to false
	I0821 04:32:04.151868    5690 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3698,"bootTime":1692613826,"procs":417,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0821 04:32:04.151947    5690 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0821 04:32:04.157095    5690 out.go:177] * [no-preload-776000] minikube v1.31.2 on Darwin 13.5 (arm64)
	I0821 04:32:04.163993    5690 out.go:177]   - MINIKUBE_LOCATION=17102
	I0821 04:32:04.164037    5690 notify.go:220] Checking for updates...
	I0821 04:32:04.167095    5690 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	I0821 04:32:04.171069    5690 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0821 04:32:04.173961    5690 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0821 04:32:04.177037    5690 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	I0821 04:32:04.180066    5690 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0821 04:32:04.183410    5690 config.go:182] Loaded profile config "multinode-806000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0821 04:32:04.183455    5690 driver.go:373] Setting default libvirt URI to qemu:///system
	I0821 04:32:04.187993    5690 out.go:177] * Using the qemu2 driver based on user configuration
	I0821 04:32:04.194958    5690 start.go:298] selected driver: qemu2
	I0821 04:32:04.194963    5690 start.go:902] validating driver "qemu2" against <nil>
	I0821 04:32:04.194968    5690 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0821 04:32:04.196839    5690 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0821 04:32:04.201037    5690 out.go:177] * Automatically selected the socket_vmnet network
	I0821 04:32:04.204099    5690 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0821 04:32:04.204121    5690 cni.go:84] Creating CNI manager for ""
	I0821 04:32:04.204128    5690 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0821 04:32:04.204132    5690 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0821 04:32:04.204138    5690 start_flags.go:319] config:
	{Name:no-preload-776000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0-rc.1 ClusterName:no-preload-776000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 04:32:04.208212    5690 iso.go:125] acquiring lock: {Name:mk813ea611542195bb0511881888be3fabc72ff6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 04:32:04.216044    5690 out.go:177] * Starting control plane node no-preload-776000 in cluster no-preload-776000
	I0821 04:32:04.220028    5690 preload.go:132] Checking if preload exists for k8s version v1.28.0-rc.1 and runtime docker
	I0821 04:32:04.220128    5690 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/no-preload-776000/config.json ...
	I0821 04:32:04.220147    5690 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/no-preload-776000/config.json: {Name:mk46fd697badaf1b3715e7a99dcfd1b02b92f0fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 04:32:04.220149    5690 cache.go:107] acquiring lock: {Name:mk2c32575c8f9aa36e98dd49f399a8549ea6540f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 04:32:04.220153    5690 cache.go:107] acquiring lock: {Name:mk17cea0c1d1349315a99b95300fd3bc56df198e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 04:32:04.220167    5690 cache.go:107] acquiring lock: {Name:mk093194b09225e31ca3d4297f7dd696e7a766bf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 04:32:04.220217    5690 cache.go:115] /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0821 04:32:04.220226    5690 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 82.417µs
	I0821 04:32:04.220233    5690 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0821 04:32:04.220239    5690 cache.go:107] acquiring lock: {Name:mka97769706b184bdc4089846ee1cc643c3521ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 04:32:04.220287    5690 cache.go:107] acquiring lock: {Name:mkfd79da71094fcd535414a576a5c187de48b7dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 04:32:04.220293    5690 cache.go:107] acquiring lock: {Name:mk04af64563544958980d82d48d80b1384f2143b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 04:32:04.220297    5690 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.0-rc.1
	I0821 04:32:04.220366    5690 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.0-rc.1
	I0821 04:32:04.220395    5690 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.0-rc.1
	I0821 04:32:04.220392    5690 cache.go:107] acquiring lock: {Name:mk33a32720655e2ee201e5833545e76fe7059bfd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 04:32:04.220383    5690 cache.go:107] acquiring lock: {Name:mk3a08a63fcb857c9b949e96deeb01896ac90ffb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 04:32:04.220449    5690 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.0-rc.1
	I0821 04:32:04.220406    5690 start.go:365] acquiring machines lock for no-preload-776000: {Name:mk9b32d9fe994be32d77812db464b2cfa7bfb400 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0821 04:32:04.220578    5690 start.go:369] acquired machines lock for "no-preload-776000" in 96.083µs
	I0821 04:32:04.220623    5690 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0821 04:32:04.220593    5690 start.go:93] Provisioning new machine with config: &{Name:no-preload-776000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0-rc.1 ClusterName:no-preload-776000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0821 04:32:04.220639    5690 start.go:125] createHost starting for "" (driver="qemu2")
	I0821 04:32:04.220639    5690 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I0821 04:32:04.220643    5690 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I0821 04:32:04.227983    5690 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0821 04:32:04.228849    5690 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.0-rc.1
	I0821 04:32:04.233381    5690 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.0-rc.1
	I0821 04:32:04.233815    5690 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I0821 04:32:04.234529    5690 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0821 04:32:04.234641    5690 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I0821 04:32:04.234734    5690 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.0-rc.1
	I0821 04:32:04.234761    5690 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.0-rc.1
	I0821 04:32:04.244213    5690 start.go:159] libmachine.API.Create for "no-preload-776000" (driver="qemu2")
	I0821 04:32:04.244225    5690 client.go:168] LocalClient.Create starting
	I0821 04:32:04.244294    5690 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem
	I0821 04:32:04.244319    5690 main.go:141] libmachine: Decoding PEM data...
	I0821 04:32:04.244330    5690 main.go:141] libmachine: Parsing certificate...
	I0821 04:32:04.244377    5690 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/cert.pem
	I0821 04:32:04.244395    5690 main.go:141] libmachine: Decoding PEM data...
	I0821 04:32:04.244405    5690 main.go:141] libmachine: Parsing certificate...
	I0821 04:32:04.244738    5690 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17102-920/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17102-920/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0821 04:32:04.368394    5690 main.go:141] libmachine: Creating SSH key...
	I0821 04:32:04.462046    5690 main.go:141] libmachine: Creating Disk image...
	I0821 04:32:04.462063    5690 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0821 04:32:04.462244    5690 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17102-920/.minikube/machines/no-preload-776000/disk.qcow2.raw /Users/jenkins/minikube-integration/17102-920/.minikube/machines/no-preload-776000/disk.qcow2
	I0821 04:32:04.471047    5690 main.go:141] libmachine: STDOUT: 
	I0821 04:32:04.471066    5690 main.go:141] libmachine: STDERR: 
	I0821 04:32:04.471121    5690 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/no-preload-776000/disk.qcow2 +20000M
	I0821 04:32:04.479174    5690 main.go:141] libmachine: STDOUT: Image resized.
	
	I0821 04:32:04.479191    5690 main.go:141] libmachine: STDERR: 
	I0821 04:32:04.479209    5690 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17102-920/.minikube/machines/no-preload-776000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17102-920/.minikube/machines/no-preload-776000/disk.qcow2
	I0821 04:32:04.479214    5690 main.go:141] libmachine: Starting QEMU VM...
	I0821 04:32:04.479252    5690 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17102-920/.minikube/machines/no-preload-776000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/no-preload-776000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17102-920/.minikube/machines/no-preload-776000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:a9:0b:c1:9e:57 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/no-preload-776000/disk.qcow2
	I0821 04:32:04.480942    5690 main.go:141] libmachine: STDOUT: 
	I0821 04:32:04.480955    5690 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0821 04:32:04.480976    5690 client.go:171] LocalClient.Create took 236.74875ms
	I0821 04:32:04.799076    5690 cache.go:162] opening:  /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.0-rc.1
	I0821 04:32:04.935803    5690 cache.go:162] opening:  /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.0-rc.1
	I0821 04:32:05.044774    5690 cache.go:162] opening:  /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1
	I0821 04:32:05.249303    5690 cache.go:162] opening:  /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9
	I0821 04:32:05.371938    5690 cache.go:157] /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0821 04:32:05.371955    5690 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 1.1516625s
	I0821 04:32:05.371962    5690 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0821 04:32:05.450271    5690 cache.go:162] opening:  /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0
	I0821 04:32:05.678447    5690 cache.go:162] opening:  /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.0-rc.1
	I0821 04:32:05.863188    5690 cache.go:162] opening:  /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.0-rc.1
	I0821 04:32:06.481096    5690 start.go:128] duration metric: createHost completed in 2.260476084s
	I0821 04:32:06.481137    5690 start.go:83] releasing machines lock for "no-preload-776000", held for 2.260592375s
	W0821 04:32:06.481201    5690 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:32:06.490339    5690 out.go:177] * Deleting "no-preload-776000" in qemu2 ...
	
	* 
	* Profile "stopped-upgrade-838000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p stopped-upgrade-838000"

-- /stdout --
version_upgrade_test.go:220: `minikube logs` after upgrade to HEAD from v1.6.2 failed: exit status 85
--- FAIL: TestStoppedBinaryUpgrade/MinikubeLogs (0.14s)
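
The failure above is a direct consequence of the earlier TestStoppedBinaryUpgrade/Setup failure: the "stopped-upgrade-838000" profile was never created, so `minikube logs` has no machine to read from and exits non-zero (status 85 in this run). A minimal reproduction sketch, assuming a host with the same minikube binary on PATH (the profile name is taken from the log above; both commands are standard minikube CLI):

	# Confirm the profile is absent; a failed Setup leaves no entry here.
	minikube profile list

	# Asking for logs from a profile that was never created reproduces this failure mode.
	minikube logs -p stopped-upgrade-838000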

TestStartStop/group/embed-certs/serial/FirstStart (9.82s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-644000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.27.4
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-644000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.27.4: exit status 80 (9.749798709s)

-- stdout --
	* [embed-certs-644000] minikube v1.31.2 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17102
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node embed-certs-644000 in cluster embed-certs-644000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-644000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0821 04:32:09.204221    5817 out.go:296] Setting OutFile to fd 1 ...
	I0821 04:32:09.204338    5817 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:32:09.204342    5817 out.go:309] Setting ErrFile to fd 2...
	I0821 04:32:09.204344    5817 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:32:09.204449    5817 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17102-920/.minikube/bin
	I0821 04:32:09.205527    5817 out.go:303] Setting JSON to false
	I0821 04:32:09.220622    5817 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3703,"bootTime":1692613826,"procs":418,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0821 04:32:09.220734    5817 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0821 04:32:09.223866    5817 out.go:177] * [embed-certs-644000] minikube v1.31.2 on Darwin 13.5 (arm64)
	I0821 04:32:09.234826    5817 out.go:177]   - MINIKUBE_LOCATION=17102
	I0821 04:32:09.230957    5817 notify.go:220] Checking for updates...
	I0821 04:32:09.241813    5817 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	I0821 04:32:09.249846    5817 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0821 04:32:09.256870    5817 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0821 04:32:09.263772    5817 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	I0821 04:32:09.271647    5817 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0821 04:32:09.275447    5817 config.go:182] Loaded profile config "multinode-806000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0821 04:32:09.275508    5817 config.go:182] Loaded profile config "no-preload-776000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0-rc.1
	I0821 04:32:09.275549    5817 driver.go:373] Setting default libvirt URI to qemu:///system
	I0821 04:32:09.278891    5817 out.go:177] * Using the qemu2 driver based on user configuration
	I0821 04:32:09.285788    5817 start.go:298] selected driver: qemu2
	I0821 04:32:09.285792    5817 start.go:902] validating driver "qemu2" against <nil>
	I0821 04:32:09.285805    5817 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0821 04:32:09.287824    5817 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0821 04:32:09.291898    5817 out.go:177] * Automatically selected the socket_vmnet network
	I0821 04:32:09.295968    5817 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0821 04:32:09.296001    5817 cni.go:84] Creating CNI manager for ""
	I0821 04:32:09.296010    5817 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0821 04:32:09.296014    5817 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0821 04:32:09.296021    5817 start_flags.go:319] config:
	{Name:embed-certs-644000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:embed-certs-644000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 04:32:09.300210    5817 iso.go:125] acquiring lock: {Name:mk813ea611542195bb0511881888be3fabc72ff6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 04:32:09.307844    5817 out.go:177] * Starting control plane node embed-certs-644000 in cluster embed-certs-644000
	I0821 04:32:09.311877    5817 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime docker
	I0821 04:32:09.311898    5817 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4
	I0821 04:32:09.311916    5817 cache.go:57] Caching tarball of preloaded images
	I0821 04:32:09.311993    5817 preload.go:174] Found /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0821 04:32:09.312001    5817 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on docker
	I0821 04:32:09.312069    5817 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/embed-certs-644000/config.json ...
	I0821 04:32:09.312082    5817 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/embed-certs-644000/config.json: {Name:mk1a5989a391f2042f6a9ee57bd32298469b11aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 04:32:09.312277    5817 start.go:365] acquiring machines lock for embed-certs-644000: {Name:mk9b32d9fe994be32d77812db464b2cfa7bfb400 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0821 04:32:09.312309    5817 start.go:369] acquired machines lock for "embed-certs-644000" in 25.959µs
	I0821 04:32:09.312321    5817 start.go:93] Provisioning new machine with config: &{Name:embed-certs-644000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:embed-certs-644000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0821 04:32:09.312352    5817 start.go:125] createHost starting for "" (driver="qemu2")
	I0821 04:32:09.315818    5817 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0821 04:32:09.331984    5817 start.go:159] libmachine.API.Create for "embed-certs-644000" (driver="qemu2")
	I0821 04:32:09.332005    5817 client.go:168] LocalClient.Create starting
	I0821 04:32:09.332056    5817 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem
	I0821 04:32:09.332081    5817 main.go:141] libmachine: Decoding PEM data...
	I0821 04:32:09.332091    5817 main.go:141] libmachine: Parsing certificate...
	I0821 04:32:09.332135    5817 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/cert.pem
	I0821 04:32:09.332153    5817 main.go:141] libmachine: Decoding PEM data...
	I0821 04:32:09.332169    5817 main.go:141] libmachine: Parsing certificate...
	I0821 04:32:09.332469    5817 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17102-920/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17102-920/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0821 04:32:09.451268    5817 main.go:141] libmachine: Creating SSH key...
	I0821 04:32:09.528673    5817 main.go:141] libmachine: Creating Disk image...
	I0821 04:32:09.528681    5817 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0821 04:32:09.528837    5817 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17102-920/.minikube/machines/embed-certs-644000/disk.qcow2.raw /Users/jenkins/minikube-integration/17102-920/.minikube/machines/embed-certs-644000/disk.qcow2
	I0821 04:32:09.537809    5817 main.go:141] libmachine: STDOUT: 
	I0821 04:32:09.537829    5817 main.go:141] libmachine: STDERR: 
	I0821 04:32:09.537900    5817 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/embed-certs-644000/disk.qcow2 +20000M
	I0821 04:32:09.545317    5817 main.go:141] libmachine: STDOUT: Image resized.
	
	I0821 04:32:09.545330    5817 main.go:141] libmachine: STDERR: 
	I0821 04:32:09.545348    5817 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17102-920/.minikube/machines/embed-certs-644000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17102-920/.minikube/machines/embed-certs-644000/disk.qcow2
	I0821 04:32:09.545364    5817 main.go:141] libmachine: Starting QEMU VM...
	I0821 04:32:09.545410    5817 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17102-920/.minikube/machines/embed-certs-644000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/embed-certs-644000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17102-920/.minikube/machines/embed-certs-644000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:21:ea:f4:2f:80 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/embed-certs-644000/disk.qcow2
	I0821 04:32:09.546964    5817 main.go:141] libmachine: STDOUT: 
	I0821 04:32:09.546977    5817 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0821 04:32:09.547001    5817 client.go:171] LocalClient.Create took 214.992417ms
	I0821 04:32:11.549147    5817 start.go:128] duration metric: createHost completed in 2.236818208s
	I0821 04:32:11.549199    5817 start.go:83] releasing machines lock for "embed-certs-644000", held for 2.236924125s
	W0821 04:32:11.549252    5817 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:32:11.567242    5817 out.go:177] * Deleting "embed-certs-644000" in qemu2 ...
	W0821 04:32:11.584441    5817 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:32:11.584470    5817 start.go:687] Will try again in 5 seconds ...
	I0821 04:32:16.586604    5817 start.go:365] acquiring machines lock for embed-certs-644000: {Name:mk9b32d9fe994be32d77812db464b2cfa7bfb400 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0821 04:32:16.587061    5817 start.go:369] acquired machines lock for "embed-certs-644000" in 327.458µs
	I0821 04:32:16.587187    5817 start.go:93] Provisioning new machine with config: &{Name:embed-certs-644000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:embed-certs-644000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0821 04:32:16.587473    5817 start.go:125] createHost starting for "" (driver="qemu2")
	I0821 04:32:16.593116    5817 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0821 04:32:16.639086    5817 start.go:159] libmachine.API.Create for "embed-certs-644000" (driver="qemu2")
	I0821 04:32:16.639145    5817 client.go:168] LocalClient.Create starting
	I0821 04:32:16.639272    5817 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem
	I0821 04:32:16.639314    5817 main.go:141] libmachine: Decoding PEM data...
	I0821 04:32:16.639330    5817 main.go:141] libmachine: Parsing certificate...
	I0821 04:32:16.639397    5817 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/cert.pem
	I0821 04:32:16.639428    5817 main.go:141] libmachine: Decoding PEM data...
	I0821 04:32:16.639443    5817 main.go:141] libmachine: Parsing certificate...
	I0821 04:32:16.639946    5817 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17102-920/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17102-920/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0821 04:32:16.773146    5817 main.go:141] libmachine: Creating SSH key...
	I0821 04:32:16.864183    5817 main.go:141] libmachine: Creating Disk image...
	I0821 04:32:16.864192    5817 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0821 04:32:16.864333    5817 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17102-920/.minikube/machines/embed-certs-644000/disk.qcow2.raw /Users/jenkins/minikube-integration/17102-920/.minikube/machines/embed-certs-644000/disk.qcow2
	I0821 04:32:16.873042    5817 main.go:141] libmachine: STDOUT: 
	I0821 04:32:16.873055    5817 main.go:141] libmachine: STDERR: 
	I0821 04:32:16.873100    5817 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/embed-certs-644000/disk.qcow2 +20000M
	I0821 04:32:16.880287    5817 main.go:141] libmachine: STDOUT: Image resized.
	
	I0821 04:32:16.880300    5817 main.go:141] libmachine: STDERR: 
	I0821 04:32:16.880312    5817 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17102-920/.minikube/machines/embed-certs-644000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17102-920/.minikube/machines/embed-certs-644000/disk.qcow2
	I0821 04:32:16.880318    5817 main.go:141] libmachine: Starting QEMU VM...
	I0821 04:32:16.880362    5817 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17102-920/.minikube/machines/embed-certs-644000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/embed-certs-644000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17102-920/.minikube/machines/embed-certs-644000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:4b:8d:66:35:b5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/embed-certs-644000/disk.qcow2
	I0821 04:32:16.881938    5817 main.go:141] libmachine: STDOUT: 
	I0821 04:32:16.881950    5817 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0821 04:32:16.881962    5817 client.go:171] LocalClient.Create took 242.814083ms
	I0821 04:32:18.884084    5817 start.go:128] duration metric: createHost completed in 2.296627209s
	I0821 04:32:18.884151    5817 start.go:83] releasing machines lock for "embed-certs-644000", held for 2.297109792s
	W0821 04:32:18.884515    5817 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-644000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-644000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:32:18.894028    5817 out.go:177] 
	W0821 04:32:18.898108    5817 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0821 04:32:18.898149    5817 out.go:239] * 
	* 
	W0821 04:32:18.900719    5817 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0821 04:32:18.910070    5817 out.go:177] 

** /stderr **
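Note that the disk bring-up in the log above succeeds: libmachine converts the seeded raw image to qcow2, then grows it with qemu-img resize before handing it to qemu-system-aarch64; the run only dies at the next step, when socket_vmnet_client cannot reach its daemon. Those two qemu-img steps, pulled out as a standalone sketch with placeholder paths (this is not minikube code):

	// diskimage.go: the two qemu-img steps visible in the log above, as a
	// hypothetical standalone reproduction; "raw" and "qcow2" are placeholders.
	package main

	import (
		"log"
		"os/exec"
	)

	func run(name string, args ...string) {
		out, err := exec.Command(name, args...).CombinedOutput()
		if err != nil {
			log.Fatalf("%s %v failed: %v\n%s", name, args, err, out)
		}
	}

	func main() {
		raw, qcow2 := "disk.qcow2.raw", "disk.qcow2" // placeholder paths
		// Convert the raw seed image to qcow2, then grow it so the guest
		// sees a 20000 MB disk; qcow2 stays sparse until blocks are written.
		run("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow2)
		run("qemu-img", "resize", qcow2, "+20000M")
	}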
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-644000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.27.4": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-644000 -n embed-certs-644000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-644000 -n embed-certs-644000: exit status 7 (70.121375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-644000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (9.82s)
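This first failure sets the pattern for the rest of the group: certificate, SSH-key, and disk creation all succeed, and the start dies only when /opt/socket_vmnet/bin/socket_vmnet_client tries to hand QEMU a network file descriptor and /var/run/socket_vmnet refuses the connection. A minimal probe for that precondition, written as a hypothetical helper rather than anything in the minikube tree:

	// socketprobe.go: hypothetical check that the socket_vmnet daemon is
	// accepting connections before any qemu2 start is attempted.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path from the failing logs
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// "Connection refused" (as above) means nothing is listening on
			// the socket, not that the path is missing.
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

On these Jenkins hosts such a probe would have failed before the first qemu-img call, pointing at the daemon (commonly run as root via launchd) rather than at the driver.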

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-776000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-776000 create -f testdata/busybox.yaml: exit status 1 (32.065167ms)

** stderr ** 
	error: no openapi getter

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-776000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-776000 -n no-preload-776000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-776000 -n no-preload-776000: exit status 7 (27.80625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-776000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-776000 -n no-preload-776000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-776000 -n no-preload-776000: exit status 7 (28.099166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-776000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)
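The "error: no openapi getter" here is a secondary symptom: kubectl is pointed at a context whose apiserver never came up, because the preceding FirstStart already failed. The post-mortem's exit status 7 with "Stopped" output already encodes that, so a guard could skip the kubectl step instead of failing. A hypothetical sketch of such a guard (not part of the test suite):

	// deployguard.go: hypothetical sketch that skips kubectl steps when the
	// profile's host is not running; exit status 7 reads as "Stopped" in the
	// post-mortems above.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func hostRunning(profile string) bool {
		out, err := exec.Command("out/minikube-darwin-arm64", "status",
			"--format={{.Host}}", "-p", profile, "-n", profile).Output()
		if err != nil {
			return false // non-zero exit: Stopped or another error state
		}
		return strings.TrimSpace(string(out)) == "Running"
	}

	func main() {
		if !hostRunning("no-preload-776000") {
			fmt.Println("host not running; skipping kubectl create")
			os.Exit(0)
		}
		// Safe to run: kubectl --context no-preload-776000 create -f testdata/busybox.yaml
	}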

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-776000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-776000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-776000 describe deploy/metrics-server -n kube-system: exit status 1 (25.062459ms)

** stderr ** 
	error: context "no-preload-776000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-776000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-776000 -n no-preload-776000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-776000 -n no-preload-776000: exit status 7 (27.83725ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-776000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)
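The assertion at start_stop_delete_test.go:221 expects the deployment to reference "fake.domain/registry.k8s.io/echoserver:1.4", that is, the --registries override prefixed onto the --images override from the addons enable invocation above. How that expected string is composed, as a sketch (not minikube's actual implementation):

	// addonimage.go: sketch of how the expected image reference in the
	// assertion above is built from the --images and --registries overrides.
	package main

	import "fmt"

	func expectedImage(registry, image string) string {
		if registry == "" {
			return image
		}
		return registry + "/" + image
	}

	func main() {
		// Values taken from the test invocation above.
		img := expectedImage("fake.domain", "registry.k8s.io/echoserver:1.4")
		fmt.Println(img) // fake.domain/registry.k8s.io/echoserver:1.4
	}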

TestStartStop/group/no-preload/serial/SecondStart (5.2s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-776000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.0-rc.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-776000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.0-rc.1: exit status 80 (5.168604042s)

-- stdout --
	* [no-preload-776000] minikube v1.31.2 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17102
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node no-preload-776000 in cluster no-preload-776000
	* Restarting existing qemu2 VM for "no-preload-776000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-776000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0821 04:32:14.308881    5849 out.go:296] Setting OutFile to fd 1 ...
	I0821 04:32:14.309001    5849 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:32:14.309004    5849 out.go:309] Setting ErrFile to fd 2...
	I0821 04:32:14.309006    5849 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:32:14.309113    5849 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17102-920/.minikube/bin
	I0821 04:32:14.310043    5849 out.go:303] Setting JSON to false
	I0821 04:32:14.324985    5849 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3708,"bootTime":1692613826,"procs":418,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0821 04:32:14.325056    5849 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0821 04:32:14.329723    5849 out.go:177] * [no-preload-776000] minikube v1.31.2 on Darwin 13.5 (arm64)
	I0821 04:32:14.336730    5849 out.go:177]   - MINIKUBE_LOCATION=17102
	I0821 04:32:14.336798    5849 notify.go:220] Checking for updates...
	I0821 04:32:14.339738    5849 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	I0821 04:32:14.343784    5849 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0821 04:32:14.346692    5849 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0821 04:32:14.349779    5849 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	I0821 04:32:14.352771    5849 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0821 04:32:14.356372    5849 config.go:182] Loaded profile config "no-preload-776000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0-rc.1
	I0821 04:32:14.356854    5849 driver.go:373] Setting default libvirt URI to qemu:///system
	I0821 04:32:14.359732    5849 out.go:177] * Using the qemu2 driver based on existing profile
	I0821 04:32:14.366701    5849 start.go:298] selected driver: qemu2
	I0821 04:32:14.366716    5849 start.go:902] validating driver "qemu2" against &{Name:no-preload-776000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0-rc.1 ClusterName:no-preload-776000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 04:32:14.366776    5849 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0821 04:32:14.368824    5849 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0821 04:32:14.368850    5849 cni.go:84] Creating CNI manager for ""
	I0821 04:32:14.368857    5849 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0821 04:32:14.368868    5849 start_flags.go:319] config:
	{Name:no-preload-776000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0-rc.1 ClusterName:no-preload-776000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 04:32:14.372772    5849 iso.go:125] acquiring lock: {Name:mk813ea611542195bb0511881888be3fabc72ff6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 04:32:14.379697    5849 out.go:177] * Starting control plane node no-preload-776000 in cluster no-preload-776000
	I0821 04:32:14.383719    5849 preload.go:132] Checking if preload exists for k8s version v1.28.0-rc.1 and runtime docker
	I0821 04:32:14.383821    5849 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/no-preload-776000/config.json ...
	I0821 04:32:14.383830    5849 cache.go:107] acquiring lock: {Name:mk2c32575c8f9aa36e98dd49f399a8549ea6540f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 04:32:14.383832    5849 cache.go:107] acquiring lock: {Name:mkfd79da71094fcd535414a576a5c187de48b7dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 04:32:14.383826    5849 cache.go:107] acquiring lock: {Name:mk04af64563544958980d82d48d80b1384f2143b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 04:32:14.383860    5849 cache.go:107] acquiring lock: {Name:mk093194b09225e31ca3d4297f7dd696e7a766bf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 04:32:14.383891    5849 cache.go:115] /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0821 04:32:14.383899    5849 cache.go:115] /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.0-rc.1 exists
	I0821 04:32:14.383900    5849 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 70.5µs
	I0821 04:32:14.383907    5849 cache.go:115] /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.0-rc.1 exists
	I0821 04:32:14.383913    5849 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0821 04:32:14.383903    5849 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.28.0-rc.1" -> "/Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.0-rc.1" took 85.125µs
	I0821 04:32:14.383921    5849 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.28.0-rc.1 -> /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.0-rc.1 succeeded
	I0821 04:32:14.383917    5849 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.28.0-rc.1" -> "/Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.0-rc.1" took 57µs
	I0821 04:32:14.383926    5849 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.28.0-rc.1 -> /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.0-rc.1 succeeded
	I0821 04:32:14.383929    5849 cache.go:107] acquiring lock: {Name:mk33a32720655e2ee201e5833545e76fe7059bfd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 04:32:14.383931    5849 cache.go:107] acquiring lock: {Name:mka97769706b184bdc4089846ee1cc643c3521ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 04:32:14.383965    5849 cache.go:115] /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0 exists
	I0821 04:32:14.383951    5849 cache.go:107] acquiring lock: {Name:mk17cea0c1d1349315a99b95300fd3bc56df198e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 04:32:14.383969    5849 cache.go:96] cache image "registry.k8s.io/etcd:3.5.9-0" -> "/Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0" took 40.75µs
	I0821 04:32:14.383973    5849 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.9-0 -> /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0 succeeded
	I0821 04:32:14.383942    5849 cache.go:115] /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.0-rc.1 exists
	I0821 04:32:14.383977    5849 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.28.0-rc.1" -> "/Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.0-rc.1" took 161.917µs
	I0821 04:32:14.383980    5849 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.28.0-rc.1 -> /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.0-rc.1 succeeded
	I0821 04:32:14.383980    5849 cache.go:115] /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 exists
	I0821 04:32:14.383984    5849 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.10.1" -> "/Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1" took 53.209µs
	I0821 04:32:14.383988    5849 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.10.1 -> /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 succeeded
	I0821 04:32:14.383997    5849 cache.go:115] /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.0-rc.1 exists
	I0821 04:32:14.384001    5849 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.28.0-rc.1" -> "/Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.0-rc.1" took 54.583µs
	I0821 04:32:14.384005    5849 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.28.0-rc.1 -> /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.0-rc.1 succeeded
	I0821 04:32:14.384011    5849 cache.go:107] acquiring lock: {Name:mk3a08a63fcb857c9b949e96deeb01896ac90ffb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 04:32:14.384061    5849 cache.go:115] /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0821 04:32:14.384065    5849 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 93.292µs
	I0821 04:32:14.384069    5849 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/17102-920/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0821 04:32:14.384073    5849 cache.go:87] Successfully saved all images to host disk.
	I0821 04:32:14.384142    5849 start.go:365] acquiring machines lock for no-preload-776000: {Name:mk9b32d9fe994be32d77812db464b2cfa7bfb400 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0821 04:32:14.384176    5849 start.go:369] acquired machines lock for "no-preload-776000" in 27.875µs
	I0821 04:32:14.384185    5849 start.go:96] Skipping create...Using existing machine configuration
	I0821 04:32:14.384188    5849 fix.go:54] fixHost starting: 
	I0821 04:32:14.384301    5849 fix.go:102] recreateIfNeeded on no-preload-776000: state=Stopped err=<nil>
	W0821 04:32:14.384310    5849 fix.go:128] unexpected machine state, will restart: <nil>
	I0821 04:32:14.391723    5849 out.go:177] * Restarting existing qemu2 VM for "no-preload-776000" ...
	I0821 04:32:14.395730    5849 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17102-920/.minikube/machines/no-preload-776000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/no-preload-776000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17102-920/.minikube/machines/no-preload-776000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:19:10:9f:d3:83 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/no-preload-776000/disk.qcow2
	I0821 04:32:14.397598    5849 main.go:141] libmachine: STDOUT: 
	I0821 04:32:14.397613    5849 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0821 04:32:14.397642    5849 fix.go:56] fixHost completed within 13.4525ms
	I0821 04:32:14.397647    5849 start.go:83] releasing machines lock for "no-preload-776000", held for 13.467542ms
	W0821 04:32:14.397654    5849 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0821 04:32:14.397688    5849 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:32:14.397692    5849 start.go:687] Will try again in 5 seconds ...
	I0821 04:32:19.398793    5849 start.go:365] acquiring machines lock for no-preload-776000: {Name:mk9b32d9fe994be32d77812db464b2cfa7bfb400 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0821 04:32:19.398911    5849 start.go:369] acquired machines lock for "no-preload-776000" in 79.875µs
	I0821 04:32:19.398939    5849 start.go:96] Skipping create...Using existing machine configuration
	I0821 04:32:19.398942    5849 fix.go:54] fixHost starting: 
	I0821 04:32:19.399085    5849 fix.go:102] recreateIfNeeded on no-preload-776000: state=Stopped err=<nil>
	W0821 04:32:19.399090    5849 fix.go:128] unexpected machine state, will restart: <nil>
	I0821 04:32:19.404181    5849 out.go:177] * Restarting existing qemu2 VM for "no-preload-776000" ...
	I0821 04:32:19.412317    5849 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17102-920/.minikube/machines/no-preload-776000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/no-preload-776000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17102-920/.minikube/machines/no-preload-776000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:19:10:9f:d3:83 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/no-preload-776000/disk.qcow2
	I0821 04:32:19.414098    5849 main.go:141] libmachine: STDOUT: 
	I0821 04:32:19.414114    5849 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0821 04:32:19.414132    5849 fix.go:56] fixHost completed within 15.1895ms
	I0821 04:32:19.414135    5849 start.go:83] releasing machines lock for "no-preload-776000", held for 15.216541ms
	W0821 04:32:19.414190    5849 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-776000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-776000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:32:19.423337    5849 out.go:177] 
	W0821 04:32:19.426232    5849 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0821 04:32:19.426238    5849 out.go:239] * 
	* 
	W0821 04:32:19.426773    5849 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0821 04:32:19.442329    5849 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-776000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.0-rc.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-776000 -n no-preload-776000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-776000 -n no-preload-776000: exit status 7 (34.6625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-776000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.20s)
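SecondStart shows the driver's recovery behavior: fixHost fails in roughly 13 ms, start.go waits a fixed 5 seconds ("Will try again in 5 seconds ..."), retries once, and gives up with GUEST_PROVISION. The shape of that bounded, fixed-delay retry, as a generalized sketch (hypothetical helper, not minikube code):

	// retrystart.go: generalized sketch of the single fixed-delay retry seen
	// in the log above; startHost is a stand-in for the failing driver start.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	func startHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func startWithRetry(attempts int, delay time.Duration) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = startHost(); err == nil {
				return nil
			}
			if i < attempts-1 {
				fmt.Printf("StartHost failed, but will try again: %v\n", err)
				time.Sleep(delay)
			}
		}
		return err
	}

	func main() {
		// Two attempts, 5 s apart, matching the behavior in the log.
		if err := startWithRetry(2, 5*time.Second); err != nil {
			fmt.Println("giving up:", err)
		}
	}

A fixed delay cannot help here, since the refused socket is a host-configuration problem rather than a transient race.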

TestStartStop/group/embed-certs/serial/DeployApp (0.08s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-644000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-644000 create -f testdata/busybox.yaml: exit status 1 (29.151458ms)

** stderr ** 
	error: no openapi getter

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-644000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-644000 -n embed-certs-644000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-644000 -n embed-certs-644000: exit status 7 (27.859834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-644000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-644000 -n embed-certs-644000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-644000 -n embed-certs-644000: exit status 7 (27.028625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-644000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.08s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-644000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-644000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-644000 describe deploy/metrics-server -n kube-system: exit status 1 (25.64125ms)

** stderr ** 
	error: context "embed-certs-644000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-644000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-644000 -n embed-certs-644000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-644000 -n embed-certs-644000: exit status 7 (27.890708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-644000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/embed-certs/serial/SecondStart (5.29s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-644000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.27.4
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-644000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.27.4: exit status 80 (5.217015666s)

-- stdout --
	* [embed-certs-644000] minikube v1.31.2 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17102
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node embed-certs-644000 in cluster embed-certs-644000
	* Restarting existing qemu2 VM for "embed-certs-644000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-644000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0821 04:32:19.374189    5878 out.go:296] Setting OutFile to fd 1 ...
	I0821 04:32:19.374315    5878 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:32:19.374318    5878 out.go:309] Setting ErrFile to fd 2...
	I0821 04:32:19.374320    5878 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:32:19.374429    5878 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17102-920/.minikube/bin
	I0821 04:32:19.375480    5878 out.go:303] Setting JSON to false
	I0821 04:32:19.390430    5878 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3713,"bootTime":1692613826,"procs":418,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0821 04:32:19.390496    5878 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0821 04:32:19.394302    5878 out.go:177] * [embed-certs-644000] minikube v1.31.2 on Darwin 13.5 (arm64)
	I0821 04:32:19.404181    5878 out.go:177]   - MINIKUBE_LOCATION=17102
	I0821 04:32:19.401345    5878 notify.go:220] Checking for updates...
	I0821 04:32:19.415203    5878 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	I0821 04:32:19.426216    5878 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0821 04:32:19.442328    5878 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0821 04:32:19.450312    5878 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	I0821 04:32:19.457200    5878 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0821 04:32:19.461530    5878 config.go:182] Loaded profile config "embed-certs-644000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0821 04:32:19.461820    5878 driver.go:373] Setting default libvirt URI to qemu:///system
	I0821 04:32:19.465207    5878 out.go:177] * Using the qemu2 driver based on existing profile
	I0821 04:32:19.473268    5878 start.go:298] selected driver: qemu2
	I0821 04:32:19.473275    5878 start.go:902] validating driver "qemu2" against &{Name:embed-certs-644000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:embed-certs-644000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 04:32:19.473344    5878 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0821 04:32:19.475537    5878 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0821 04:32:19.475571    5878 cni.go:84] Creating CNI manager for ""
	I0821 04:32:19.475577    5878 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0821 04:32:19.475584    5878 start_flags.go:319] config:
	{Name:embed-certs-644000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:embed-certs-644000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 04:32:19.479071    5878 iso.go:125] acquiring lock: {Name:mk813ea611542195bb0511881888be3fabc72ff6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 04:32:19.486301    5878 out.go:177] * Starting control plane node embed-certs-644000 in cluster embed-certs-644000
	I0821 04:32:19.489206    5878 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime docker
	I0821 04:32:19.489226    5878 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4
	I0821 04:32:19.489234    5878 cache.go:57] Caching tarball of preloaded images
	I0821 04:32:19.489294    5878 preload.go:174] Found /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0821 04:32:19.489300    5878 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on docker
	I0821 04:32:19.489358    5878 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/embed-certs-644000/config.json ...
	I0821 04:32:19.489594    5878 start.go:365] acquiring machines lock for embed-certs-644000: {Name:mk9b32d9fe994be32d77812db464b2cfa7bfb400 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0821 04:32:19.489621    5878 start.go:369] acquired machines lock for "embed-certs-644000" in 19.541µs
	I0821 04:32:19.489630    5878 start.go:96] Skipping create...Using existing machine configuration
	I0821 04:32:19.489633    5878 fix.go:54] fixHost starting: 
	I0821 04:32:19.489741    5878 fix.go:102] recreateIfNeeded on embed-certs-644000: state=Stopped err=<nil>
	W0821 04:32:19.489749    5878 fix.go:128] unexpected machine state, will restart: <nil>
	I0821 04:32:19.497196    5878 out.go:177] * Restarting existing qemu2 VM for "embed-certs-644000" ...
	I0821 04:32:19.500309    5878 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17102-920/.minikube/machines/embed-certs-644000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/embed-certs-644000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17102-920/.minikube/machines/embed-certs-644000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:4b:8d:66:35:b5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/embed-certs-644000/disk.qcow2
	I0821 04:32:19.502339    5878 main.go:141] libmachine: STDOUT: 
	I0821 04:32:19.502355    5878 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0821 04:32:19.502388    5878 fix.go:56] fixHost completed within 12.751541ms
	I0821 04:32:19.502393    5878 start.go:83] releasing machines lock for "embed-certs-644000", held for 12.768584ms
	W0821 04:32:19.502403    5878 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0821 04:32:19.502439    5878 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:32:19.502445    5878 start.go:687] Will try again in 5 seconds ...
	I0821 04:32:24.504510    5878 start.go:365] acquiring machines lock for embed-certs-644000: {Name:mk9b32d9fe994be32d77812db464b2cfa7bfb400 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0821 04:32:24.505019    5878 start.go:369] acquired machines lock for "embed-certs-644000" in 407.917µs
	I0821 04:32:24.505163    5878 start.go:96] Skipping create...Using existing machine configuration
	I0821 04:32:24.505189    5878 fix.go:54] fixHost starting: 
	I0821 04:32:24.505982    5878 fix.go:102] recreateIfNeeded on embed-certs-644000: state=Stopped err=<nil>
	W0821 04:32:24.506009    5878 fix.go:128] unexpected machine state, will restart: <nil>
	I0821 04:32:24.516395    5878 out.go:177] * Restarting existing qemu2 VM for "embed-certs-644000" ...
	I0821 04:32:24.520571    5878 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17102-920/.minikube/machines/embed-certs-644000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/embed-certs-644000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17102-920/.minikube/machines/embed-certs-644000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:4b:8d:66:35:b5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/embed-certs-644000/disk.qcow2
	I0821 04:32:24.529854    5878 main.go:141] libmachine: STDOUT: 
	I0821 04:32:24.529901    5878 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0821 04:32:24.529992    5878 fix.go:56] fixHost completed within 24.8065ms
	I0821 04:32:24.530009    5878 start.go:83] releasing machines lock for "embed-certs-644000", held for 24.967875ms
	W0821 04:32:24.530181    5878 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-644000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-644000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:32:24.538348    5878 out.go:177] 
	W0821 04:32:24.542464    5878 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0821 04:32:24.542496    5878 out.go:239] * 
	* 
	W0821 04:32:24.544965    5878 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0821 04:32:24.553376    5878 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-644000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.27.4": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-644000 -n embed-certs-644000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-644000 -n embed-certs-644000: exit status 7 (66.669458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-644000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.29s)
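
Every start failure in this run reduces to the same symptom visible in the stderr above: nothing is listening on /var/run/socket_vmnet, so socket_vmnet_client cannot hand QEMU a network file descriptor and the driver aborts. A minimal standalone Go sketch (a hypothetical helper, not part of the test suite; the socket path is taken from the log) that reproduces just this connectivity probe:

    // probe_socket_vmnet.go - standalone sketch that reproduces the failing
    // check: dial the unix socket that socket_vmnet_client expects before it
    // can launch qemu-system-aarch64.
    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        const sock = "/var/run/socket_vmnet" // path taken from the log above
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            // On this CI host the dial fails with "connection refused",
            // matching every VM start attempt in this run.
            fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
            os.Exit(1)
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }

On a healthy host the dial succeeds; here it returns "connection refused" on every attempt, which suggests the socket_vmnet service on the CI machine is down rather than any per-test problem.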

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-776000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-776000 -n no-preload-776000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-776000 -n no-preload-776000: exit status 7 (27.203667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-776000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)
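
The context "no-preload-776000" does not exist errors in this and the following subtests are a knock-on effect: because the earlier start steps never produced a running cluster, minikube never wrote the profile's context into the kubeconfig. A sketch of the lookup that fails, assuming client-go is available as a dependency; the kubeconfig path is the one printed in this run's environment block:

    // check_context.go - sketch of the client-config lookup behind the
    // failure above; the context name and kubeconfig path come from the log.
    package main

    import (
        "fmt"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.LoadFromFile("/Users/jenkins/minikube-integration/17102-920/kubeconfig")
        if err != nil {
            fmt.Println("load kubeconfig:", err)
            return
        }
        if _, ok := cfg.Contexts["no-preload-776000"]; !ok {
            // Matches the test error: the profile never started, so minikube
            // never wrote this context entry.
            fmt.Println(`context "no-preload-776000" does not exist`)
        }
    }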

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.05s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-776000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-776000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-776000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (25.403125ms)

** stderr ** 
	error: context "no-preload-776000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-776000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-776000 -n no-preload-776000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-776000 -n no-preload-776000: exit status 7 (28.100584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-776000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.05s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p no-preload-776000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p no-preload-776000 "sudo crictl images -o json": exit status 89 (37.546708ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p no-preload-776000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p no-preload-776000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p no-preload-776000"
start_stop_delete_test.go:304: v1.28.0-rc.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.0-rc.1",
- 	"registry.k8s.io/kube-controller-manager:v1.28.0-rc.1",
- 	"registry.k8s.io/kube-proxy:v1.28.0-rc.1",
- 	"registry.k8s.io/kube-scheduler:v1.28.0-rc.1",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-776000 -n no-preload-776000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-776000 -n no-preload-776000: exit status 7 (27.809417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-776000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)
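
The failure above has two stages: "minikube ssh" exits 89 because the control plane is down, and the test then feeds minikube's plain-text hint into a JSON decoder, which is what yields invalid character '*' looking for beginning of value. A minimal sketch of that decode step (the struct shape is an assumption, not the actual test code):

    // decode_hint.go - shows why decoding fails: the captured stdout is
    // minikube's plain-text hint, not crictl's JSON image listing.
    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // Stdout captured above when the control plane is not running.
        output := "* The control plane node must be running for this command\n" +
            "  To start a cluster, run: \"minikube start -p no-preload-776000\"\n"

        var images struct {
            Images []struct {
                RepoTags []string `json:"repoTags"`
            } `json:"images"`
        }
        err := json.Unmarshal([]byte(output), &images)
        fmt.Println(err) // invalid character '*' looking for beginning of value
    }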

TestStartStop/group/no-preload/serial/Pause (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-776000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-776000 --alsologtostderr -v=1: exit status 89 (39.618084ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p no-preload-776000"

-- /stdout --
** stderr ** 
	I0821 04:32:19.660491    5897 out.go:296] Setting OutFile to fd 1 ...
	I0821 04:32:19.660630    5897 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:32:19.660633    5897 out.go:309] Setting ErrFile to fd 2...
	I0821 04:32:19.660635    5897 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:32:19.660753    5897 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17102-920/.minikube/bin
	I0821 04:32:19.660966    5897 out.go:303] Setting JSON to false
	I0821 04:32:19.660974    5897 mustload.go:65] Loading cluster: no-preload-776000
	I0821 04:32:19.661147    5897 config.go:182] Loaded profile config "no-preload-776000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0-rc.1
	I0821 04:32:19.665000    5897 out.go:177] * The control plane node must be running for this command
	I0821 04:32:19.669066    5897 out.go:177]   To start a cluster, run: "minikube start -p no-preload-776000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-776000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-776000 -n no-preload-776000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-776000 -n no-preload-776000: exit status 7 (27.762208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-776000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-776000 -n no-preload-776000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-776000 -n no-preload-776000: exit status 7 (27.763792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-776000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-202000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.27.4
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-202000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.27.4: exit status 80 (9.987019958s)

-- stdout --
	* [default-k8s-diff-port-202000] minikube v1.31.2 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17102
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node default-k8s-diff-port-202000 in cluster default-k8s-diff-port-202000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-202000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0821 04:32:20.351514    5932 out.go:296] Setting OutFile to fd 1 ...
	I0821 04:32:20.351644    5932 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:32:20.351648    5932 out.go:309] Setting ErrFile to fd 2...
	I0821 04:32:20.351650    5932 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:32:20.351767    5932 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17102-920/.minikube/bin
	I0821 04:32:20.352849    5932 out.go:303] Setting JSON to false
	I0821 04:32:20.368932    5932 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3714,"bootTime":1692613826,"procs":418,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0821 04:32:20.369002    5932 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0821 04:32:20.374217    5932 out.go:177] * [default-k8s-diff-port-202000] minikube v1.31.2 on Darwin 13.5 (arm64)
	I0821 04:32:20.381239    5932 notify.go:220] Checking for updates...
	I0821 04:32:20.381241    5932 out.go:177]   - MINIKUBE_LOCATION=17102
	I0821 04:32:20.384221    5932 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	I0821 04:32:20.388033    5932 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0821 04:32:20.391176    5932 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0821 04:32:20.394220    5932 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	I0821 04:32:20.397222    5932 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0821 04:32:20.400512    5932 config.go:182] Loaded profile config "embed-certs-644000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0821 04:32:20.400567    5932 config.go:182] Loaded profile config "multinode-806000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0821 04:32:20.400605    5932 driver.go:373] Setting default libvirt URI to qemu:///system
	I0821 04:32:20.405158    5932 out.go:177] * Using the qemu2 driver based on user configuration
	I0821 04:32:20.412173    5932 start.go:298] selected driver: qemu2
	I0821 04:32:20.412180    5932 start.go:902] validating driver "qemu2" against <nil>
	I0821 04:32:20.412187    5932 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0821 04:32:20.414165    5932 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0821 04:32:20.417180    5932 out.go:177] * Automatically selected the socket_vmnet network
	I0821 04:32:20.420295    5932 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0821 04:32:20.420324    5932 cni.go:84] Creating CNI manager for ""
	I0821 04:32:20.420339    5932 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0821 04:32:20.420343    5932 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0821 04:32:20.420349    5932 start_flags.go:319] config:
	{Name:default-k8s-diff-port-202000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:default-k8s-diff-port-202000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:doc
ker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAg
entPID:0}
	I0821 04:32:20.424516    5932 iso.go:125] acquiring lock: {Name:mk813ea611542195bb0511881888be3fabc72ff6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 04:32:20.428209    5932 out.go:177] * Starting control plane node default-k8s-diff-port-202000 in cluster default-k8s-diff-port-202000
	I0821 04:32:20.435177    5932 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime docker
	I0821 04:32:20.435196    5932 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4
	I0821 04:32:20.435209    5932 cache.go:57] Caching tarball of preloaded images
	I0821 04:32:20.435288    5932 preload.go:174] Found /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0821 04:32:20.435294    5932 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on docker
	I0821 04:32:20.435373    5932 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/default-k8s-diff-port-202000/config.json ...
	I0821 04:32:20.435392    5932 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/default-k8s-diff-port-202000/config.json: {Name:mk39990a7a365b9491209b2b49eea89177ed4b69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 04:32:20.435592    5932 start.go:365] acquiring machines lock for default-k8s-diff-port-202000: {Name:mk9b32d9fe994be32d77812db464b2cfa7bfb400 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0821 04:32:20.435625    5932 start.go:369] acquired machines lock for "default-k8s-diff-port-202000" in 23.958µs
	I0821 04:32:20.435636    5932 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-202000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27
.4 ClusterName:default-k8s-diff-port-202000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOp
tions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8444 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0821 04:32:20.435669    5932 start.go:125] createHost starting for "" (driver="qemu2")
	I0821 04:32:20.444171    5932 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0821 04:32:20.460205    5932 start.go:159] libmachine.API.Create for "default-k8s-diff-port-202000" (driver="qemu2")
	I0821 04:32:20.460232    5932 client.go:168] LocalClient.Create starting
	I0821 04:32:20.460287    5932 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem
	I0821 04:32:20.460311    5932 main.go:141] libmachine: Decoding PEM data...
	I0821 04:32:20.460321    5932 main.go:141] libmachine: Parsing certificate...
	I0821 04:32:20.460359    5932 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/cert.pem
	I0821 04:32:20.460378    5932 main.go:141] libmachine: Decoding PEM data...
	I0821 04:32:20.460386    5932 main.go:141] libmachine: Parsing certificate...
	I0821 04:32:20.460712    5932 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17102-920/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17102-920/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0821 04:32:20.585383    5932 main.go:141] libmachine: Creating SSH key...
	I0821 04:32:20.880860    5932 main.go:141] libmachine: Creating Disk image...
	I0821 04:32:20.880873    5932 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0821 04:32:20.881070    5932 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17102-920/.minikube/machines/default-k8s-diff-port-202000/disk.qcow2.raw /Users/jenkins/minikube-integration/17102-920/.minikube/machines/default-k8s-diff-port-202000/disk.qcow2
	I0821 04:32:20.890221    5932 main.go:141] libmachine: STDOUT: 
	I0821 04:32:20.890252    5932 main.go:141] libmachine: STDERR: 
	I0821 04:32:20.890316    5932 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/default-k8s-diff-port-202000/disk.qcow2 +20000M
	I0821 04:32:20.897553    5932 main.go:141] libmachine: STDOUT: Image resized.
	
	I0821 04:32:20.897565    5932 main.go:141] libmachine: STDERR: 
	I0821 04:32:20.897588    5932 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17102-920/.minikube/machines/default-k8s-diff-port-202000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17102-920/.minikube/machines/default-k8s-diff-port-202000/disk.qcow2
	I0821 04:32:20.897598    5932 main.go:141] libmachine: Starting QEMU VM...
	I0821 04:32:20.897637    5932 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17102-920/.minikube/machines/default-k8s-diff-port-202000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/default-k8s-diff-port-202000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17102-920/.minikube/machines/default-k8s-diff-port-202000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:f7:63:c3:7d:cb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/default-k8s-diff-port-202000/disk.qcow2
	I0821 04:32:20.899158    5932 main.go:141] libmachine: STDOUT: 
	I0821 04:32:20.899170    5932 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0821 04:32:20.899191    5932 client.go:171] LocalClient.Create took 438.9595ms
	I0821 04:32:22.901432    5932 start.go:128] duration metric: createHost completed in 2.4657935s
	I0821 04:32:22.901486    5932 start.go:83] releasing machines lock for "default-k8s-diff-port-202000", held for 2.465895042s
	W0821 04:32:22.901544    5932 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:32:22.909834    5932 out.go:177] * Deleting "default-k8s-diff-port-202000" in qemu2 ...
	W0821 04:32:22.931640    5932 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:32:22.931664    5932 start.go:687] Will try again in 5 seconds ...
	I0821 04:32:27.933790    5932 start.go:365] acquiring machines lock for default-k8s-diff-port-202000: {Name:mk9b32d9fe994be32d77812db464b2cfa7bfb400 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0821 04:32:27.934258    5932 start.go:369] acquired machines lock for "default-k8s-diff-port-202000" in 363.625µs
	I0821 04:32:27.934384    5932 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-202000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27
.4 ClusterName:default-k8s-diff-port-202000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOp
tions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8444 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0821 04:32:27.934866    5932 start.go:125] createHost starting for "" (driver="qemu2")
	I0821 04:32:27.943253    5932 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0821 04:32:27.991344    5932 start.go:159] libmachine.API.Create for "default-k8s-diff-port-202000" (driver="qemu2")
	I0821 04:32:27.991394    5932 client.go:168] LocalClient.Create starting
	I0821 04:32:27.991493    5932 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem
	I0821 04:32:27.991559    5932 main.go:141] libmachine: Decoding PEM data...
	I0821 04:32:27.991582    5932 main.go:141] libmachine: Parsing certificate...
	I0821 04:32:27.991647    5932 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/cert.pem
	I0821 04:32:27.991682    5932 main.go:141] libmachine: Decoding PEM data...
	I0821 04:32:27.991696    5932 main.go:141] libmachine: Parsing certificate...
	I0821 04:32:27.992423    5932 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17102-920/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17102-920/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0821 04:32:28.127899    5932 main.go:141] libmachine: Creating SSH key...
	I0821 04:32:28.247246    5932 main.go:141] libmachine: Creating Disk image...
	I0821 04:32:28.247257    5932 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0821 04:32:28.247389    5932 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17102-920/.minikube/machines/default-k8s-diff-port-202000/disk.qcow2.raw /Users/jenkins/minikube-integration/17102-920/.minikube/machines/default-k8s-diff-port-202000/disk.qcow2
	I0821 04:32:28.256222    5932 main.go:141] libmachine: STDOUT: 
	I0821 04:32:28.256237    5932 main.go:141] libmachine: STDERR: 
	I0821 04:32:28.256292    5932 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/default-k8s-diff-port-202000/disk.qcow2 +20000M
	I0821 04:32:28.263487    5932 main.go:141] libmachine: STDOUT: Image resized.
	
	I0821 04:32:28.263500    5932 main.go:141] libmachine: STDERR: 
	I0821 04:32:28.263513    5932 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17102-920/.minikube/machines/default-k8s-diff-port-202000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17102-920/.minikube/machines/default-k8s-diff-port-202000/disk.qcow2
	I0821 04:32:28.263520    5932 main.go:141] libmachine: Starting QEMU VM...
	I0821 04:32:28.263564    5932 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17102-920/.minikube/machines/default-k8s-diff-port-202000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/default-k8s-diff-port-202000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17102-920/.minikube/machines/default-k8s-diff-port-202000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:28:2b:05:ab:40 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/default-k8s-diff-port-202000/disk.qcow2
	I0821 04:32:28.265128    5932 main.go:141] libmachine: STDOUT: 
	I0821 04:32:28.265144    5932 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0821 04:32:28.265160    5932 client.go:171] LocalClient.Create took 273.766125ms
	I0821 04:32:30.267331    5932 start.go:128] duration metric: createHost completed in 2.332462791s
	I0821 04:32:30.267430    5932 start.go:83] releasing machines lock for "default-k8s-diff-port-202000", held for 2.33315175s
	W0821 04:32:30.267889    5932 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-202000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-202000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:32:30.278454    5932 out.go:177] 
	W0821 04:32:30.282513    5932 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0821 04:32:30.282552    5932 out.go:239] * 
	* 
	W0821 04:32:30.285088    5932 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0821 04:32:30.298437    5932 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-202000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.27.4": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-202000 -n default-k8s-diff-port-202000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-202000 -n default-k8s-diff-port-202000: exit status 7 (66.545625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-202000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.06s)
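
The stderr above shows minikube's single-retry provisioning flow: the first create fails, the half-built profile is deleted, a fixed 5-second wait follows ("Will try again in 5 seconds ..."), and the second failure becomes the GUEST_PROVISION exit. A compressed sketch of that flow, with createHost and deleteHost as hypothetical stand-ins for the libmachine calls:

    // retry_start.go - sketch of the create/delete/retry sequence in the log;
    // here the create always fails the way socket_vmnet does on this host.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    func createHost(name string) error {
        // Stand-in for libmachine.API.Create, which fails at VM launch.
        return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    func deleteHost(name string) {
        fmt.Printf("* Deleting %q in qemu2 ...\n", name)
    }

    func startWithRetry(name string) error {
        err := createHost(name)
        if err == nil {
            return nil
        }
        fmt.Println("! StartHost failed, but will try again:", err)
        deleteHost(name)
        time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
        return createHost(name)     // second and final attempt
    }

    func main() {
        if err := startWithRetry("default-k8s-diff-port-202000"); err != nil {
            fmt.Println("X Exiting due to GUEST_PROVISION:", err)
        }
    }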

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-644000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-644000 -n embed-certs-644000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-644000 -n embed-certs-644000: exit status 7 (31.143417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-644000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.05s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-644000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-644000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-644000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (25.1645ms)

** stderr ** 
	error: context "embed-certs-644000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-644000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-644000 -n embed-certs-644000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-644000 -n embed-certs-644000: exit status 7 (28.241833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-644000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.05s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p embed-certs-644000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p embed-certs-644000 "sudo crictl images -o json": exit status 89 (38.162167ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p embed-certs-644000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p embed-certs-644000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p embed-certs-644000"
start_stop_delete_test.go:304: v1.27.4 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.7-0",
- 	"registry.k8s.io/kube-apiserver:v1.27.4",
- 	"registry.k8s.io/kube-controller-manager:v1.27.4",
- 	"registry.k8s.io/kube-proxy:v1.27.4",
- 	"registry.k8s.io/kube-scheduler:v1.27.4",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-644000 -n embed-certs-644000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-644000 -n embed-certs-644000: exit status 7 (27.639833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-644000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/embed-certs/serial/Pause (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-644000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-644000 --alsologtostderr -v=1: exit status 89 (39.886125ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p embed-certs-644000"

-- /stdout --
** stderr ** 
	I0821 04:32:24.811561    5954 out.go:296] Setting OutFile to fd 1 ...
	I0821 04:32:24.811720    5954 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:32:24.811723    5954 out.go:309] Setting ErrFile to fd 2...
	I0821 04:32:24.811726    5954 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:32:24.811835    5954 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17102-920/.minikube/bin
	I0821 04:32:24.812038    5954 out.go:303] Setting JSON to false
	I0821 04:32:24.812049    5954 mustload.go:65] Loading cluster: embed-certs-644000
	I0821 04:32:24.812258    5954 config.go:182] Loaded profile config "embed-certs-644000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0821 04:32:24.816380    5954 out.go:177] * The control plane node must be running for this command
	I0821 04:32:24.820458    5954 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-644000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-644000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-644000 -n embed-certs-644000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-644000 -n embed-certs-644000: exit status 7 (28.059875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-644000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-644000 -n embed-certs-644000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-644000 -n embed-certs-644000: exit status 7 (27.734792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-644000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)

TestStartStop/group/newest-cni/serial/FirstStart (9.85s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-600000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.0-rc.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-600000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.0-rc.1: exit status 80 (9.782680334s)

-- stdout --
	* [newest-cni-600000] minikube v1.31.2 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17102
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node newest-cni-600000 in cluster newest-cni-600000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-600000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0821 04:32:25.276882    5977 out.go:296] Setting OutFile to fd 1 ...
	I0821 04:32:25.276997    5977 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:32:25.277000    5977 out.go:309] Setting ErrFile to fd 2...
	I0821 04:32:25.277002    5977 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:32:25.277110    5977 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17102-920/.minikube/bin
	I0821 04:32:25.278095    5977 out.go:303] Setting JSON to false
	I0821 04:32:25.293303    5977 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3719,"bootTime":1692613826,"procs":418,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0821 04:32:25.293370    5977 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0821 04:32:25.297862    5977 out.go:177] * [newest-cni-600000] minikube v1.31.2 on Darwin 13.5 (arm64)
	I0821 04:32:25.308797    5977 out.go:177]   - MINIKUBE_LOCATION=17102
	I0821 04:32:25.308838    5977 notify.go:220] Checking for updates...
	I0821 04:32:25.313102    5977 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	I0821 04:32:25.315797    5977 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0821 04:32:25.318804    5977 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0821 04:32:25.321792    5977 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	I0821 04:32:25.324744    5977 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0821 04:32:25.328107    5977 config.go:182] Loaded profile config "default-k8s-diff-port-202000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0821 04:32:25.328166    5977 config.go:182] Loaded profile config "multinode-806000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0821 04:32:25.328212    5977 driver.go:373] Setting default libvirt URI to qemu:///system
	I0821 04:32:25.332775    5977 out.go:177] * Using the qemu2 driver based on user configuration
	I0821 04:32:25.339778    5977 start.go:298] selected driver: qemu2
	I0821 04:32:25.339786    5977 start.go:902] validating driver "qemu2" against <nil>
	I0821 04:32:25.339793    5977 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0821 04:32:25.341766    5977 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	W0821 04:32:25.341789    5977 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0821 04:32:25.349793    5977 out.go:177] * Automatically selected the socket_vmnet network
	I0821 04:32:25.352816    5977 start_flags.go:938] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0821 04:32:25.352836    5977 cni.go:84] Creating CNI manager for ""
	I0821 04:32:25.352844    5977 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0821 04:32:25.352849    5977 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0821 04:32:25.352855    5977 start_flags.go:319] config:
	{Name:newest-cni-600000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0-rc.1 ClusterName:newest-cni-600000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: Ne
tworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client
SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 04:32:25.357533    5977 iso.go:125] acquiring lock: {Name:mk813ea611542195bb0511881888be3fabc72ff6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 04:32:25.365790    5977 out.go:177] * Starting control plane node newest-cni-600000 in cluster newest-cni-600000
	I0821 04:32:25.369746    5977 preload.go:132] Checking if preload exists for k8s version v1.28.0-rc.1 and runtime docker
	I0821 04:32:25.369766    5977 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-rc.1-docker-overlay2-arm64.tar.lz4
	I0821 04:32:25.369781    5977 cache.go:57] Caching tarball of preloaded images
	I0821 04:32:25.369889    5977 preload.go:174] Found /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-rc.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0821 04:32:25.369895    5977 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.0-rc.1 on docker
	I0821 04:32:25.369977    5977 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/newest-cni-600000/config.json ...
	I0821 04:32:25.369992    5977 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/newest-cni-600000/config.json: {Name:mk2c829fccc0110a60d1e0bda216fe5e4c9ad07e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 04:32:25.370199    5977 start.go:365] acquiring machines lock for newest-cni-600000: {Name:mk9b32d9fe994be32d77812db464b2cfa7bfb400 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0821 04:32:25.370229    5977 start.go:369] acquired machines lock for "newest-cni-600000" in 24.125µs
	I0821 04:32:25.370240    5977 start.go:93] Provisioning new machine with config: &{Name:newest-cni-600000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0-rc.1 ClusterName:newest-cni-600000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0821 04:32:25.370276    5977 start.go:125] createHost starting for "" (driver="qemu2")
	I0821 04:32:25.374822    5977 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0821 04:32:25.390362    5977 start.go:159] libmachine.API.Create for "newest-cni-600000" (driver="qemu2")
	I0821 04:32:25.390383    5977 client.go:168] LocalClient.Create starting
	I0821 04:32:25.390429    5977 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem
	I0821 04:32:25.390459    5977 main.go:141] libmachine: Decoding PEM data...
	I0821 04:32:25.390468    5977 main.go:141] libmachine: Parsing certificate...
	I0821 04:32:25.390507    5977 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/cert.pem
	I0821 04:32:25.390524    5977 main.go:141] libmachine: Decoding PEM data...
	I0821 04:32:25.390534    5977 main.go:141] libmachine: Parsing certificate...
	I0821 04:32:25.390846    5977 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17102-920/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17102-920/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0821 04:32:25.510605    5977 main.go:141] libmachine: Creating SSH key...
	I0821 04:32:25.654764    5977 main.go:141] libmachine: Creating Disk image...
	I0821 04:32:25.654772    5977 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0821 04:32:25.654936    5977 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17102-920/.minikube/machines/newest-cni-600000/disk.qcow2.raw /Users/jenkins/minikube-integration/17102-920/.minikube/machines/newest-cni-600000/disk.qcow2
	I0821 04:32:25.663931    5977 main.go:141] libmachine: STDOUT: 
	I0821 04:32:25.663942    5977 main.go:141] libmachine: STDERR: 
	I0821 04:32:25.663990    5977 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/newest-cni-600000/disk.qcow2 +20000M
	I0821 04:32:25.671226    5977 main.go:141] libmachine: STDOUT: Image resized.
	
	I0821 04:32:25.671238    5977 main.go:141] libmachine: STDERR: 
	I0821 04:32:25.671250    5977 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17102-920/.minikube/machines/newest-cni-600000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17102-920/.minikube/machines/newest-cni-600000/disk.qcow2
	I0821 04:32:25.671256    5977 main.go:141] libmachine: Starting QEMU VM...
	I0821 04:32:25.671291    5977 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17102-920/.minikube/machines/newest-cni-600000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/newest-cni-600000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17102-920/.minikube/machines/newest-cni-600000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:9f:26:e7:74:d9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/newest-cni-600000/disk.qcow2
	I0821 04:32:25.672743    5977 main.go:141] libmachine: STDOUT: 
	I0821 04:32:25.672754    5977 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0821 04:32:25.672791    5977 client.go:171] LocalClient.Create took 282.390084ms
	I0821 04:32:27.674928    5977 start.go:128] duration metric: createHost completed in 2.304677917s
	I0821 04:32:27.674984    5977 start.go:83] releasing machines lock for "newest-cni-600000", held for 2.30479075s
	W0821 04:32:27.675039    5977 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:32:27.683250    5977 out.go:177] * Deleting "newest-cni-600000" in qemu2 ...
	W0821 04:32:27.704794    5977 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:32:27.704825    5977 start.go:687] Will try again in 5 seconds ...
	I0821 04:32:32.707032    5977 start.go:365] acquiring machines lock for newest-cni-600000: {Name:mk9b32d9fe994be32d77812db464b2cfa7bfb400 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0821 04:32:32.707468    5977 start.go:369] acquired machines lock for "newest-cni-600000" in 313.583µs
	I0821 04:32:32.707595    5977 start.go:93] Provisioning new machine with config: &{Name:newest-cni-600000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0-rc.1 ClusterName:newest-cni-600000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0821 04:32:32.707888    5977 start.go:125] createHost starting for "" (driver="qemu2")
	I0821 04:32:32.716435    5977 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0821 04:32:32.762013    5977 start.go:159] libmachine.API.Create for "newest-cni-600000" (driver="qemu2")
	I0821 04:32:32.762059    5977 client.go:168] LocalClient.Create starting
	I0821 04:32:32.762160    5977 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/ca.pem
	I0821 04:32:32.762203    5977 main.go:141] libmachine: Decoding PEM data...
	I0821 04:32:32.762219    5977 main.go:141] libmachine: Parsing certificate...
	I0821 04:32:32.762282    5977 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17102-920/.minikube/certs/cert.pem
	I0821 04:32:32.762309    5977 main.go:141] libmachine: Decoding PEM data...
	I0821 04:32:32.762321    5977 main.go:141] libmachine: Parsing certificate...
	I0821 04:32:32.762799    5977 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17102-920/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17102-920/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0821 04:32:32.897373    5977 main.go:141] libmachine: Creating SSH key...
	I0821 04:32:32.975089    5977 main.go:141] libmachine: Creating Disk image...
	I0821 04:32:32.975095    5977 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0821 04:32:32.975233    5977 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17102-920/.minikube/machines/newest-cni-600000/disk.qcow2.raw /Users/jenkins/minikube-integration/17102-920/.minikube/machines/newest-cni-600000/disk.qcow2
	I0821 04:32:32.983719    5977 main.go:141] libmachine: STDOUT: 
	I0821 04:32:32.983735    5977 main.go:141] libmachine: STDERR: 
	I0821 04:32:32.983801    5977 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/newest-cni-600000/disk.qcow2 +20000M
	I0821 04:32:32.991066    5977 main.go:141] libmachine: STDOUT: Image resized.
	
	I0821 04:32:32.991090    5977 main.go:141] libmachine: STDERR: 
	I0821 04:32:32.991103    5977 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17102-920/.minikube/machines/newest-cni-600000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17102-920/.minikube/machines/newest-cni-600000/disk.qcow2
	I0821 04:32:32.991109    5977 main.go:141] libmachine: Starting QEMU VM...
	I0821 04:32:32.991148    5977 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17102-920/.minikube/machines/newest-cni-600000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/newest-cni-600000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17102-920/.minikube/machines/newest-cni-600000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:40:77:96:24:23 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/newest-cni-600000/disk.qcow2
	I0821 04:32:32.992716    5977 main.go:141] libmachine: STDOUT: 
	I0821 04:32:32.992732    5977 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0821 04:32:32.992745    5977 client.go:171] LocalClient.Create took 230.680583ms
	I0821 04:32:34.994976    5977 start.go:128] duration metric: createHost completed in 2.287074375s
	I0821 04:32:34.995065    5977 start.go:83] releasing machines lock for "newest-cni-600000", held for 2.287612166s
	W0821 04:32:34.995536    5977 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-600000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-600000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:32:35.004245    5977 out.go:177] 
	W0821 04:32:35.009363    5977 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0821 04:32:35.009417    5977 out.go:239] * 
	* 
	W0821 04:32:35.012051    5977 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0821 04:32:35.020218    5977 out.go:177] 

** /stderr **
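Every start failure in this section reduces to the same condition visible in the STDERR above: the qemu2 driver launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, and the dial to the socket_vmnet daemon's unix socket at /var/run/socket_vmnet is refused, so the daemon was evidently not running on this host. A minimal Go sketch (hypothetical, not part of the test suite; it assumes only the SocketVMnetPath printed in the config above) that reproduces the failing dial:

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Path taken from SocketVMnetPath in the profile config logged above.
	const sock = "/var/run/socket_vmnet"
	// A "connection refused" here matches the STDERR in this log and means
	// the socket_vmnet daemon is not listening on that path.
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections at", sock)
}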
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-600000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.0-rc.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-600000 -n newest-cni-600000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-600000 -n newest-cni-600000: exit status 7 (62.802ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-600000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.85s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-202000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-202000 create -f testdata/busybox.yaml: exit status 1 (28.919542ms)

** stderr ** 
	error: no openapi getter

** /stderr **
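The "error: no openapi getter" from kubectl is consistent with an unreachable apiserver: kubectl create -f wants the cluster's OpenAPI schema for validation, and with the host stopped (see the post-mortem below) there is nothing to query. A hedged pre-flight check, sketched in Go using only the context name from the failing command above, that fails fast with a clearer message:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Context name taken from the failing kubectl command in this log.
	ctx := "default-k8s-diff-port-202000"
	// "kubectl cluster-info" errors out immediately when the apiserver
	// behind the context is down, before any manifest is applied.
	cmd := exec.Command("kubectl", "--context", ctx, "cluster-info")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintf(os.Stderr, "cluster for context %q is unreachable: %v\n", ctx, err)
		os.Exit(1)
	}
}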
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-202000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-202000 -n default-k8s-diff-port-202000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-202000 -n default-k8s-diff-port-202000: exit status 7 (27.887ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-202000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-202000 -n default-k8s-diff-port-202000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-202000 -n default-k8s-diff-port-202000: exit status 7 (27.422459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-202000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.08s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-202000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-202000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-202000 describe deploy/metrics-server -n kube-system: exit status 1 (25.313333ms)

** stderr ** 
	error: context "default-k8s-diff-port-202000" does not exist

** /stderr **
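"context ... does not exist" means the profile's entry never made it into the kubeconfig, because the VM was never provisioned. A small sketch of how one could confirm that from the kubeconfig printed in the start logs above; this assumes the k8s.io/client-go dependency, which the test suite itself does not use here:

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// KUBECONFIG path as printed in the minikube start output above.
	cfg, err := clientcmd.LoadFromFile("/Users/jenkins/minikube-integration/17102-920/kubeconfig")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// A missing entry here is exactly what kubectl reports as
	// `context "..." does not exist`.
	for name := range cfg.Contexts {
		fmt.Println("available context:", name)
	}
}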
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-202000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-202000 -n default-k8s-diff-port-202000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-202000 -n default-k8s-diff-port-202000: exit status 7 (28.030666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-202000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-202000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.27.4
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-202000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.27.4: exit status 80 (5.176524s)

-- stdout --
	* [default-k8s-diff-port-202000] minikube v1.31.2 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17102
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node default-k8s-diff-port-202000 in cluster default-k8s-diff-port-202000
	* Restarting existing qemu2 VM for "default-k8s-diff-port-202000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-202000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	I0821 04:32:30.751637    6009 out.go:296] Setting OutFile to fd 1 ...
	I0821 04:32:30.751742    6009 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:32:30.751744    6009 out.go:309] Setting ErrFile to fd 2...
	I0821 04:32:30.751746    6009 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:32:30.751855    6009 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17102-920/.minikube/bin
	I0821 04:32:30.752883    6009 out.go:303] Setting JSON to false
	I0821 04:32:30.767923    6009 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3724,"bootTime":1692613826,"procs":418,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0821 04:32:30.767995    6009 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0821 04:32:30.772677    6009 out.go:177] * [default-k8s-diff-port-202000] minikube v1.31.2 on Darwin 13.5 (arm64)
	I0821 04:32:30.779591    6009 out.go:177]   - MINIKUBE_LOCATION=17102
	I0821 04:32:30.779640    6009 notify.go:220] Checking for updates...
	I0821 04:32:30.786689    6009 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	I0821 04:32:30.789581    6009 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0821 04:32:30.792624    6009 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0821 04:32:30.796632    6009 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	I0821 04:32:30.799592    6009 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0821 04:32:30.802953    6009 config.go:182] Loaded profile config "default-k8s-diff-port-202000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0821 04:32:30.803194    6009 driver.go:373] Setting default libvirt URI to qemu:///system
	I0821 04:32:30.806592    6009 out.go:177] * Using the qemu2 driver based on existing profile
	I0821 04:32:30.813627    6009 start.go:298] selected driver: qemu2
	I0821 04:32:30.813634    6009 start.go:902] validating driver "qemu2" against &{Name:default-k8s-diff-port-202000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:default-k8s-diff-port-202000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 04:32:30.813693    6009 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0821 04:32:30.815651    6009 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0821 04:32:30.815676    6009 cni.go:84] Creating CNI manager for ""
	I0821 04:32:30.815682    6009 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0821 04:32:30.815688    6009 start_flags.go:319] config:
	{Name:default-k8s-diff-port-202000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:default-k8s-diff-port-202000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 04:32:30.819843    6009 iso.go:125] acquiring lock: {Name:mk813ea611542195bb0511881888be3fabc72ff6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 04:32:30.828609    6009 out.go:177] * Starting control plane node default-k8s-diff-port-202000 in cluster default-k8s-diff-port-202000
	I0821 04:32:30.832629    6009 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime docker
	I0821 04:32:30.832648    6009 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4
	I0821 04:32:30.832731    6009 cache.go:57] Caching tarball of preloaded images
	I0821 04:32:30.832790    6009 preload.go:174] Found /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0821 04:32:30.832795    6009 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on docker
	I0821 04:32:30.832868    6009 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/default-k8s-diff-port-202000/config.json ...
	I0821 04:32:30.833175    6009 start.go:365] acquiring machines lock for default-k8s-diff-port-202000: {Name:mk9b32d9fe994be32d77812db464b2cfa7bfb400 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0821 04:32:30.833200    6009 start.go:369] acquired machines lock for "default-k8s-diff-port-202000" in 19.208µs
	I0821 04:32:30.833209    6009 start.go:96] Skipping create...Using existing machine configuration
	I0821 04:32:30.833214    6009 fix.go:54] fixHost starting: 
	I0821 04:32:30.833324    6009 fix.go:102] recreateIfNeeded on default-k8s-diff-port-202000: state=Stopped err=<nil>
	W0821 04:32:30.833332    6009 fix.go:128] unexpected machine state, will restart: <nil>
	I0821 04:32:30.840645    6009 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-202000" ...
	I0821 04:32:30.844672    6009 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17102-920/.minikube/machines/default-k8s-diff-port-202000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/default-k8s-diff-port-202000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17102-920/.minikube/machines/default-k8s-diff-port-202000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:28:2b:05:ab:40 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/default-k8s-diff-port-202000/disk.qcow2
	I0821 04:32:30.846605    6009 main.go:141] libmachine: STDOUT: 
	I0821 04:32:30.846620    6009 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0821 04:32:30.846647    6009 fix.go:56] fixHost completed within 13.431125ms
	I0821 04:32:30.846652    6009 start.go:83] releasing machines lock for "default-k8s-diff-port-202000", held for 13.448208ms
	W0821 04:32:30.846658    6009 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0821 04:32:30.846684    6009 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:32:30.846688    6009 start.go:687] Will try again in 5 seconds ...
	I0821 04:32:35.848709    6009 start.go:365] acquiring machines lock for default-k8s-diff-port-202000: {Name:mk9b32d9fe994be32d77812db464b2cfa7bfb400 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0821 04:32:35.849073    6009 start.go:369] acquired machines lock for "default-k8s-diff-port-202000" in 279.75µs
	I0821 04:32:35.849218    6009 start.go:96] Skipping create...Using existing machine configuration
	I0821 04:32:35.849236    6009 fix.go:54] fixHost starting: 
	I0821 04:32:35.849962    6009 fix.go:102] recreateIfNeeded on default-k8s-diff-port-202000: state=Stopped err=<nil>
	W0821 04:32:35.849985    6009 fix.go:128] unexpected machine state, will restart: <nil>
	I0821 04:32:35.858448    6009 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-202000" ...
	I0821 04:32:35.862516    6009 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17102-920/.minikube/machines/default-k8s-diff-port-202000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/default-k8s-diff-port-202000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17102-920/.minikube/machines/default-k8s-diff-port-202000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:28:2b:05:ab:40 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/default-k8s-diff-port-202000/disk.qcow2
	I0821 04:32:35.870726    6009 main.go:141] libmachine: STDOUT: 
	I0821 04:32:35.870794    6009 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0821 04:32:35.870881    6009 fix.go:56] fixHost completed within 21.644ms
	I0821 04:32:35.870899    6009 start.go:83] releasing machines lock for "default-k8s-diff-port-202000", held for 21.803917ms
	W0821 04:32:35.871056    6009 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-202000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-202000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:32:35.877476    6009 out.go:177] 
	W0821 04:32:35.880484    6009 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0821 04:32:35.880509    6009 out.go:239] * 
	* 
	W0821 04:32:35.882963    6009 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0821 04:32:35.889458    6009 out.go:177] 

** /stderr **
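The doubled "Restarting existing qemu2 VM" block in the stdout above is minikube's internal retry, visible in the stderr trace: StartHost fails, start.go waits five seconds ("Will try again in 5 seconds ..."), tries once more, and then exits with GUEST_PROVISION. A stripped-down sketch of that control flow; startHost here is a hypothetical stub that fails the way the driver does in this run, not minikube's actual code:

package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost stands in for minikube's host start and always fails the way
// the qemu2 driver fails throughout this report.
func startHost() error {
	return errors.New(`driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	// One failed attempt, a 5-second pause, one retry, then the fatal
	// GUEST_PROVISION exit - hence the duplicated restart lines in stdout.
	if err := startHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second)
		if err := startHost(); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}
}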
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-202000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.27.4": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-202000 -n default-k8s-diff-port-202000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-202000 -n default-k8s-diff-port-202000: exit status 7 (66.775458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-202000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.25s)

TestStartStop/group/newest-cni/serial/SecondStart (5.24s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-600000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.0-rc.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-600000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.0-rc.1: exit status 80 (5.173221792s)

-- stdout --
	* [newest-cni-600000] minikube v1.31.2 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17102
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node newest-cni-600000 in cluster newest-cni-600000
	* Restarting existing qemu2 VM for "newest-cni-600000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-600000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	I0821 04:32:35.338816    6030 out.go:296] Setting OutFile to fd 1 ...
	I0821 04:32:35.338942    6030 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:32:35.338945    6030 out.go:309] Setting ErrFile to fd 2...
	I0821 04:32:35.338947    6030 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:32:35.339054    6030 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17102-920/.minikube/bin
	I0821 04:32:35.339991    6030 out.go:303] Setting JSON to false
	I0821 04:32:35.354862    6030 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3729,"bootTime":1692613826,"procs":418,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0821 04:32:35.354929    6030 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0821 04:32:35.358912    6030 out.go:177] * [newest-cni-600000] minikube v1.31.2 on Darwin 13.5 (arm64)
	I0821 04:32:35.365897    6030 out.go:177]   - MINIKUBE_LOCATION=17102
	I0821 04:32:35.365977    6030 notify.go:220] Checking for updates...
	I0821 04:32:35.368891    6030 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	I0821 04:32:35.372831    6030 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0821 04:32:35.375858    6030 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0821 04:32:35.378871    6030 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	I0821 04:32:35.381789    6030 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0821 04:32:35.385140    6030 config.go:182] Loaded profile config "newest-cni-600000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0-rc.1
	I0821 04:32:35.385394    6030 driver.go:373] Setting default libvirt URI to qemu:///system
	I0821 04:32:35.388860    6030 out.go:177] * Using the qemu2 driver based on existing profile
	I0821 04:32:35.395857    6030 start.go:298] selected driver: qemu2
	I0821 04:32:35.395864    6030 start.go:902] validating driver "qemu2" against &{Name:newest-cni-600000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0-rc.1 ClusterName:newest-cni-600000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 04:32:35.395918    6030 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0821 04:32:35.397873    6030 start_flags.go:938] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0821 04:32:35.397897    6030 cni.go:84] Creating CNI manager for ""
	I0821 04:32:35.397903    6030 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0821 04:32:35.397908    6030 start_flags.go:319] config:
	{Name:newest-cni-600000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0-rc.1 ClusterName:newest-cni-600000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 04:32:35.401762    6030 iso.go:125] acquiring lock: {Name:mk813ea611542195bb0511881888be3fabc72ff6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 04:32:35.408835    6030 out.go:177] * Starting control plane node newest-cni-600000 in cluster newest-cni-600000
	I0821 04:32:35.412861    6030 preload.go:132] Checking if preload exists for k8s version v1.28.0-rc.1 and runtime docker
	I0821 04:32:35.412884    6030 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-rc.1-docker-overlay2-arm64.tar.lz4
	I0821 04:32:35.412901    6030 cache.go:57] Caching tarball of preloaded images
	I0821 04:32:35.412964    6030 preload.go:174] Found /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-rc.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0821 04:32:35.412970    6030 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.0-rc.1 on docker
	I0821 04:32:35.413034    6030 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/newest-cni-600000/config.json ...
	I0821 04:32:35.413399    6030 start.go:365] acquiring machines lock for newest-cni-600000: {Name:mk9b32d9fe994be32d77812db464b2cfa7bfb400 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0821 04:32:35.413427    6030 start.go:369] acquired machines lock for "newest-cni-600000" in 22.583µs
	I0821 04:32:35.413436    6030 start.go:96] Skipping create...Using existing machine configuration
	I0821 04:32:35.413439    6030 fix.go:54] fixHost starting: 
	I0821 04:32:35.413550    6030 fix.go:102] recreateIfNeeded on newest-cni-600000: state=Stopped err=<nil>
	W0821 04:32:35.413558    6030 fix.go:128] unexpected machine state, will restart: <nil>
	I0821 04:32:35.416888    6030 out.go:177] * Restarting existing qemu2 VM for "newest-cni-600000" ...
	I0821 04:32:35.423822    6030 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17102-920/.minikube/machines/newest-cni-600000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/newest-cni-600000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17102-920/.minikube/machines/newest-cni-600000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:40:77:96:24:23 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/newest-cni-600000/disk.qcow2
	I0821 04:32:35.425570    6030 main.go:141] libmachine: STDOUT: 
	I0821 04:32:35.425608    6030 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0821 04:32:35.425645    6030 fix.go:56] fixHost completed within 12.203417ms
	I0821 04:32:35.425650    6030 start.go:83] releasing machines lock for "newest-cni-600000", held for 12.219292ms
	W0821 04:32:35.425656    6030 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0821 04:32:35.425681    6030 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:32:35.425685    6030 start.go:687] Will try again in 5 seconds ...
	I0821 04:32:40.427720    6030 start.go:365] acquiring machines lock for newest-cni-600000: {Name:mk9b32d9fe994be32d77812db464b2cfa7bfb400 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0821 04:32:40.428244    6030 start.go:369] acquired machines lock for "newest-cni-600000" in 440.916µs
	I0821 04:32:40.428386    6030 start.go:96] Skipping create...Using existing machine configuration
	I0821 04:32:40.428408    6030 fix.go:54] fixHost starting: 
	I0821 04:32:40.429084    6030 fix.go:102] recreateIfNeeded on newest-cni-600000: state=Stopped err=<nil>
	W0821 04:32:40.429112    6030 fix.go:128] unexpected machine state, will restart: <nil>
	I0821 04:32:40.437456    6030 out.go:177] * Restarting existing qemu2 VM for "newest-cni-600000" ...
	I0821 04:32:40.441647    6030 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17102-920/.minikube/machines/newest-cni-600000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/newest-cni-600000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17102-920/.minikube/machines/newest-cni-600000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:40:77:96:24:23 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17102-920/.minikube/machines/newest-cni-600000/disk.qcow2
	I0821 04:32:40.451129    6030 main.go:141] libmachine: STDOUT: 
	I0821 04:32:40.451209    6030 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0821 04:32:40.451318    6030 fix.go:56] fixHost completed within 22.909041ms
	I0821 04:32:40.451342    6030 start.go:83] releasing machines lock for "newest-cni-600000", held for 23.073ms
	W0821 04:32:40.451618    6030 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-600000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-600000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0821 04:32:40.460452    6030 out.go:177] 
	W0821 04:32:40.463556    6030 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0821 04:32:40.463597    6030 out.go:239] * 
	* 
	W0821 04:32:40.465879    6030 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0821 04:32:40.473441    6030 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-600000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.0-rc.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-600000 -n newest-cni-600000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-600000 -n newest-cni-600000: exit status 7 (68.380333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-600000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.24s)
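Note: every qemu2 start and restart in this run fails at the same point: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so QEMU never receives its network file descriptor and the driver gives up after one retry. A minimal Go sketch of the same reachability check, useful for diagnosing the CI host (the socket path is taken from the "executing:" line above; this is an illustration, not part of the test suite):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Same socket path the driver hands to socket_vmnet_client.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", time.Second)
		if err != nil {
			// On this host the dial would fail with "connect: connection refused",
			// matching the STDERR captured above.
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is listening")
	}

If the dial fails, the socket_vmnet daemon is not running (or not listening at that path), which would account for every qemu2 start failure in this report.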

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-202000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-202000 -n default-k8s-diff-port-202000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-202000 -n default-k8s-diff-port-202000: exit status 7 (30.427291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-202000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)
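The "context does not exist" failure here is downstream of the start failure: minikube writes the default-k8s-diff-port-202000 context into kubeconfig only once a cluster actually comes up, so with the VM never started, every kubectl --context call in this group dies at client-config time. Running kubectl config get-contexts on the host would show no such entry.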

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.05s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-202000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-202000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-202000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (25.004417ms)

** stderr ** 
	error: context "default-k8s-diff-port-202000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-202000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-202000 -n default-k8s-diff-port-202000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-202000 -n default-k8s-diff-port-202000: exit status 7 (28.122958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-202000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.05s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-202000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-202000 "sudo crictl images -o json": exit status 89 (39.491708ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-202000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-202000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p default-k8s-diff-port-202000"
start_stop_delete_test.go:304: v1.27.4 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.7-0",
- 	"registry.k8s.io/kube-apiserver:v1.27.4",
- 	"registry.k8s.io/kube-controller-manager:v1.27.4",
- 	"registry.k8s.io/kube-proxy:v1.27.4",
- 	"registry.k8s.io/kube-scheduler:v1.27.4",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-202000 -n default-k8s-diff-port-202000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-202000 -n default-k8s-diff-port-202000: exit status 7 (27.696166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-202000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
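The "failed to decode images json" line is the expected symptom of the exit-status-89 path: stdout carried minikube's advisory text rather than JSON, so decoding stops at the leading '*'. crictl images -o json prints an object with an images array; a small self-contained Go sketch of both decode outcomes (field names follow crictl's JSON output; an illustration, not the test's actual code):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// Subset of the structure `crictl images -o json` emits.
	type imageList struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func main() {
		var list imageList

		good := []byte(`{"images":[{"repoTags":["registry.k8s.io/pause:3.9"]}]}`)
		fmt.Println(json.Unmarshal(good, &list)) // <nil>

		// What the ssh command actually printed:
		bad := []byte("* The control plane node must be running for this command")
		fmt.Println(json.Unmarshal(bad, &list)) // invalid character '*' looking for beginning of value
	}

With nothing decoded, the go-cmp style "(-want +got)" diff above lists every expected v1.27.4 image as missing.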

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-202000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-202000 --alsologtostderr -v=1: exit status 89 (38.62275ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-202000"

-- /stdout --
** stderr ** 
	I0821 04:32:36.149158    6049 out.go:296] Setting OutFile to fd 1 ...
	I0821 04:32:36.149312    6049 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:32:36.149314    6049 out.go:309] Setting ErrFile to fd 2...
	I0821 04:32:36.149317    6049 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:32:36.149426    6049 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17102-920/.minikube/bin
	I0821 04:32:36.149632    6049 out.go:303] Setting JSON to false
	I0821 04:32:36.149640    6049 mustload.go:65] Loading cluster: default-k8s-diff-port-202000
	I0821 04:32:36.149817    6049 config.go:182] Loaded profile config "default-k8s-diff-port-202000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0821 04:32:36.153267    6049 out.go:177] * The control plane node must be running for this command
	I0821 04:32:36.157376    6049 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-202000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-202000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-202000 -n default-k8s-diff-port-202000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-202000 -n default-k8s-diff-port-202000: exit status 7 (27.868041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-202000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-202000 -n default-k8s-diff-port-202000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-202000 -n default-k8s-diff-port-202000: exit status 7 (27.407708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-202000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.09s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p newest-cni-600000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p newest-cni-600000 "sudo crictl images -o json": exit status 89 (46.311542ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p newest-cni-600000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p newest-cni-600000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p newest-cni-600000"
start_stop_delete_test.go:304: v1.28.0-rc.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.0-rc.1",
- 	"registry.k8s.io/kube-controller-manager:v1.28.0-rc.1",
- 	"registry.k8s.io/kube-proxy:v1.28.0-rc.1",
- 	"registry.k8s.io/kube-scheduler:v1.28.0-rc.1",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-600000 -n newest-cni-600000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-600000 -n newest-cni-600000: exit status 7 (28.407458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-600000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

TestStartStop/group/newest-cni/serial/Pause (0.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-600000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-600000 --alsologtostderr -v=1: exit status 89 (40.195458ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p newest-cni-600000"

-- /stdout --
** stderr ** 
	I0821 04:32:40.658006    6079 out.go:296] Setting OutFile to fd 1 ...
	I0821 04:32:40.658154    6079 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:32:40.658157    6079 out.go:309] Setting ErrFile to fd 2...
	I0821 04:32:40.658159    6079 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:32:40.658275    6079 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17102-920/.minikube/bin
	I0821 04:32:40.658482    6079 out.go:303] Setting JSON to false
	I0821 04:32:40.658489    6079 mustload.go:65] Loading cluster: newest-cni-600000
	I0821 04:32:40.658684    6079 config.go:182] Loaded profile config "newest-cni-600000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0-rc.1
	I0821 04:32:40.662754    6079 out.go:177] * The control plane node must be running for this command
	I0821 04:32:40.666841    6079 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-600000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-600000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-600000 -n newest-cni-600000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-600000 -n newest-cni-600000: exit status 7 (28.446542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-600000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-600000 -n newest-cni-600000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-600000 -n newest-cni-600000: exit status 7 (28.294958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-600000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)
Test pass (142/261)

Order passed test Duration
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.09
10 TestDownloadOnly/v1.27.4/json-events 8.69
11 TestDownloadOnly/v1.27.4/preload-exists 0
14 TestDownloadOnly/v1.27.4/kubectl 0
15 TestDownloadOnly/v1.27.4/LogsDuration 0.07
17 TestDownloadOnly/v1.28.0-rc.1/json-events 9.38
18 TestDownloadOnly/v1.28.0-rc.1/preload-exists 0
21 TestDownloadOnly/v1.28.0-rc.1/kubectl 0
22 TestDownloadOnly/v1.28.0-rc.1/LogsDuration 0.08
23 TestDownloadOnly/DeleteAll 0.26
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.23
26 TestBinaryMirror 0.37
29 TestAddons/Setup 404.15
38 TestAddons/parallel/Headlamp 11.41
49 TestHyperKitDriverInstallOrUpdate 8.86
52 TestErrorSpam/setup 29.71
53 TestErrorSpam/start 0.36
54 TestErrorSpam/status 0.27
55 TestErrorSpam/pause 0.7
56 TestErrorSpam/unpause 0.65
57 TestErrorSpam/stop 3.24
60 TestFunctional/serial/CopySyncFile 0
61 TestFunctional/serial/StartWithProxy 54.12
62 TestFunctional/serial/AuditLog 0
63 TestFunctional/serial/SoftStart 36.79
64 TestFunctional/serial/KubeContext 0.03
65 TestFunctional/serial/KubectlGetPods 0.04
68 TestFunctional/serial/CacheCmd/cache/add_remote 3.55
69 TestFunctional/serial/CacheCmd/cache/add_local 1.23
70 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.03
71 TestFunctional/serial/CacheCmd/cache/list 0.03
72 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.07
73 TestFunctional/serial/CacheCmd/cache/cache_reload 0.89
74 TestFunctional/serial/CacheCmd/cache/delete 0.07
75 TestFunctional/serial/MinikubeKubectlCmd 0.44
76 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.57
77 TestFunctional/serial/ExtraConfig 37.2
78 TestFunctional/serial/ComponentHealth 0.04
79 TestFunctional/serial/LogsCmd 0.64
80 TestFunctional/serial/LogsFileCmd 0.64
81 TestFunctional/serial/InvalidService 3.88
83 TestFunctional/parallel/ConfigCmd 0.21
84 TestFunctional/parallel/DashboardCmd 8.74
85 TestFunctional/parallel/DryRun 0.22
86 TestFunctional/parallel/InternationalLanguage 0.11
87 TestFunctional/parallel/StatusCmd 0.24
92 TestFunctional/parallel/AddonsCmd 0.12
93 TestFunctional/parallel/PersistentVolumeClaim 24.61
95 TestFunctional/parallel/SSHCmd 0.13
96 TestFunctional/parallel/CpCmd 0.28
98 TestFunctional/parallel/FileSync 0.06
99 TestFunctional/parallel/CertSync 0.41
103 TestFunctional/parallel/NodeLabels 0.05
105 TestFunctional/parallel/NonActiveRuntimeDisabled 0.14
107 TestFunctional/parallel/License 0.19
108 TestFunctional/parallel/Version/short 0.04
109 TestFunctional/parallel/Version/components 0.25
110 TestFunctional/parallel/ImageCommands/ImageListShort 0.08
111 TestFunctional/parallel/ImageCommands/ImageListTable 0.07
112 TestFunctional/parallel/ImageCommands/ImageListJson 0.07
113 TestFunctional/parallel/ImageCommands/ImageListYaml 0.08
114 TestFunctional/parallel/ImageCommands/ImageBuild 1.67
115 TestFunctional/parallel/ImageCommands/Setup 1.55
116 TestFunctional/parallel/DockerEnv/bash 0.38
117 TestFunctional/parallel/UpdateContextCmd/no_changes 0.05
118 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
119 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
120 TestFunctional/parallel/ServiceCmd/DeployApp 12.1
121 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.18
122 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.54
123 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 3.51
124 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.51
125 TestFunctional/parallel/ImageCommands/ImageRemove 0.15
126 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.59
127 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.61
130 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
132 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.13
133 TestFunctional/parallel/ServiceCmd/List 0.14
134 TestFunctional/parallel/ServiceCmd/JSONOutput 0.09
135 TestFunctional/parallel/ServiceCmd/HTTPS 0.1
136 TestFunctional/parallel/ServiceCmd/Format 0.1
137 TestFunctional/parallel/ServiceCmd/URL 0.1
138 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
139 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
140 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
141 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
142 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
144 TestFunctional/parallel/ProfileCmd/profile_not_create 0.18
145 TestFunctional/parallel/ProfileCmd/profile_list 0.14
146 TestFunctional/parallel/ProfileCmd/profile_json_output 0.15
147 TestFunctional/parallel/MountCmd/any-port 5.2
148 TestFunctional/parallel/MountCmd/specific-port 0.94
150 TestFunctional/delete_addon-resizer_images 0.16
151 TestFunctional/delete_my-image_image 0.04
152 TestFunctional/delete_minikube_cached_images 0.04
156 TestImageBuild/serial/Setup 29.58
157 TestImageBuild/serial/NormalBuild 1.07
159 TestImageBuild/serial/BuildWithDockerIgnore 0.16
160 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.1
163 TestIngressAddonLegacy/StartLegacyK8sCluster 65.85
165 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 13.85
166 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.23
170 TestJSONOutput/start/Command 73.03
171 TestJSONOutput/start/Audit 0
173 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
174 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
176 TestJSONOutput/pause/Command 0.31
177 TestJSONOutput/pause/Audit 0
179 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
180 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
182 TestJSONOutput/unpause/Command 0.23
183 TestJSONOutput/unpause/Audit 0
185 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
188 TestJSONOutput/stop/Command 9.07
189 TestJSONOutput/stop/Audit 0
191 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
193 TestErrorJSONOutput 0.33
198 TestMainNoArgs 0.03
199 TestMinikubeProfile 60.97
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
259 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
260 TestNoKubernetes/serial/ProfileList 0.15
261 TestNoKubernetes/serial/Stop 0.06
263 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
277 TestStartStop/group/old-k8s-version/serial/Stop 0.06
278 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.09
292 TestStartStop/group/no-preload/serial/Stop 0.06
293 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.09
297 TestStartStop/group/embed-certs/serial/Stop 0.06
298 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.09
314 TestStartStop/group/default-k8s-diff-port/serial/Stop 0.06
315 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.09
317 TestStartStop/group/newest-cni/serial/DeployApp 0
318 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
319 TestStartStop/group/newest-cni/serial/Stop 0.06
320 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.09
326 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
327 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-670000
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-670000: exit status 85 (92.530833ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-670000 | jenkins | v1.31.2 | 21 Aug 23 03:33 PDT |          |
	|         | -p download-only-670000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/21 03:33:15
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.7 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0821 03:33:15.084599    1364 out.go:296] Setting OutFile to fd 1 ...
	I0821 03:33:15.084734    1364 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 03:33:15.084737    1364 out.go:309] Setting ErrFile to fd 2...
	I0821 03:33:15.084739    1364 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 03:33:15.084854    1364 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17102-920/.minikube/bin
	W0821 03:33:15.084911    1364 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17102-920/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17102-920/.minikube/config/config.json: no such file or directory
	I0821 03:33:15.085985    1364 out.go:303] Setting JSON to true
	I0821 03:33:15.102645    1364 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":169,"bootTime":1692613826,"procs":399,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0821 03:33:15.102723    1364 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0821 03:33:15.109779    1364 out.go:97] [download-only-670000] minikube v1.31.2 on Darwin 13.5 (arm64)
	I0821 03:33:15.113932    1364 out.go:169] MINIKUBE_LOCATION=17102
	W0821 03:33:15.109940    1364 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball: no such file or directory
	I0821 03:33:15.109949    1364 notify.go:220] Checking for updates...
	I0821 03:33:15.122864    1364 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	I0821 03:33:15.126942    1364 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0821 03:33:15.128266    1364 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0821 03:33:15.130953    1364 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	W0821 03:33:15.136890    1364 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0821 03:33:15.137064    1364 driver.go:373] Setting default libvirt URI to qemu:///system
	I0821 03:33:15.141874    1364 out.go:97] Using the qemu2 driver based on user configuration
	I0821 03:33:15.141883    1364 start.go:298] selected driver: qemu2
	I0821 03:33:15.141885    1364 start.go:902] validating driver "qemu2" against <nil>
	I0821 03:33:15.141949    1364 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0821 03:33:15.145922    1364 out.go:169] Automatically selected the socket_vmnet network
	I0821 03:33:15.152485    1364 start_flags.go:382] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0821 03:33:15.152630    1364 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0821 03:33:15.152687    1364 cni.go:84] Creating CNI manager for ""
	I0821 03:33:15.152703    1364 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0821 03:33:15.152709    1364 start_flags.go:319] config:
	{Name:download-only-670000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-670000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 03:33:15.158251    1364 iso.go:125] acquiring lock: {Name:mk813ea611542195bb0511881888be3fabc72ff6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 03:33:15.161924    1364 out.go:97] Downloading VM boot image ...
	I0821 03:33:15.161950    1364 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/17102-920/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso
	E0821 03:33:15.323126    1364 iso.go:90] Unable to download https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso: getter: &{Ctx:context.Background Src:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso.sha256 Dst:/Users/jenkins/minikube-integration/17102-920/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso.download Pwd: Mode:2 Umask:---------- Detectors:[0x1046845b8 0x1046845b8 0x1046845b8 0x1046845b8 0x1046845b8 0x1046845b8 0x1046845b8] Decompressors:map[bz2:0x1400058de18 gz:0x1400058de70 tar:0x1400058de20 tar.bz2:0x1400058de30 tar.gz:0x1400058de40 tar.xz:0x1400058de50 tar.zst:0x1400058de60 tbz2:0x1400058de30 tgz:0x1400058de40 txz:0x1400058de50 tzst:0x1400058de60 xz:0x1400058de78 zip:0x1400058de80 zst:0x1400058de90] Getters:map[file:0x14000ff1c30 http:0x14000dcd8b0 https:0x14000dcd900] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	I0821 03:33:15.323189    1364 iso.go:125] acquiring lock: {Name:mk813ea611542195bb0511881888be3fabc72ff6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 03:33:15.328691    1364 out.go:97] Downloading VM boot image ...
	I0821 03:33:15.328780    1364 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.31.0/minikube-v1.31.0-arm64.iso?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.31.0/minikube-v1.31.0-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/17102-920/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso
	I0821 03:33:22.835102    1364 out.go:97] Starting control plane node download-only-670000 in cluster download-only-670000
	I0821 03:33:22.835130    1364 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0821 03:33:22.892327    1364 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0821 03:33:22.892399    1364 cache.go:57] Caching tarball of preloaded images
	I0821 03:33:22.892582    1364 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0821 03:33:22.897647    1364 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0821 03:33:22.897654    1364 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0821 03:33:22.975485    1364 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4?checksum=md5:a000baffb0664b293d602f95ed25caa6 -> /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0821 03:33:27.948828    1364 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0821 03:33:27.948974    1364 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0821 03:33:28.589788    1364 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0821 03:33:28.589982    1364 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/download-only-670000/config.json ...
	I0821 03:33:28.590000    1364 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/download-only-670000/config.json: {Name:mk3f18ac86e426c28be79e36d4316c065cb7c923 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 03:33:28.590247    1364 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0821 03:33:28.590424    1364 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 -> /Users/jenkins/minikube-integration/17102-920/.minikube/cache/darwin/arm64/v1.16.0/kubectl
	I0821 03:33:28.905100    1364 out.go:169] 
	W0821 03:33:28.909303    1364 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/17102-920/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1046845b8 0x1046845b8 0x1046845b8 0x1046845b8 0x1046845b8 0x1046845b8 0x1046845b8] Decompressors:map[bz2:0x1400058de18 gz:0x1400058de70 tar:0x1400058de20 tar.bz2:0x1400058de30 tar.gz:0x1400058de40 tar.xz:0x1400058de50 tar.zst:0x1400058de60 tbz2:0x1400058de30 tgz:0x1400058de40 txz:0x1400058de50 tzst:0x1400058de60 xz:0x1400058de78 zip:0x1400058de80 zst:0x1400058de90] Getters:map[file:0x14000f4c600 http:0x14000144460 https:0x14000144500] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0821 03:33:28.909332    1364 out_reason.go:110] 
	W0821 03:33:28.916086    1364 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0821 03:33:28.919157    1364 out.go:169] 
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-670000"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.09s)
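The kubectl caching error in the log above has a mundane cause: Kubernetes v1.16.0 predates Apple Silicon, so dl.k8s.io serves no darwin/arm64 binary or .sha1 checksum for it and the fetch 404s; this is also why TestDownloadOnly/v1.16.0/json-events failed. Note the ISO download does recover, falling back from storage.googleapis.com to the GitHub release URL, but the kubectl fetch has no such fallback. A quick Go sketch that probes the same checksum URL, assuming the 404 still holds:

	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		// The checksum URL the getter tried; expect "404 Not Found",
		// matching "bad response code: 404" in the log above.
		url := "https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1"
		resp, err := http.Head(url)
		if err != nil {
			fmt.Println(err)
			return
		}
		defer resp.Body.Close()
		fmt.Println(resp.Status)
	}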

TestDownloadOnly/v1.27.4/json-events (8.69s)

=== RUN   TestDownloadOnly/v1.27.4/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-670000 --force --alsologtostderr --kubernetes-version=v1.27.4 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-670000 --force --alsologtostderr --kubernetes-version=v1.27.4 --container-runtime=docker --driver=qemu2 : (8.687558458s)
--- PASS: TestDownloadOnly/v1.27.4/json-events (8.69s)

TestDownloadOnly/v1.27.4/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.27.4/preload-exists
--- PASS: TestDownloadOnly/v1.27.4/preload-exists (0.00s)

TestDownloadOnly/v1.27.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.27.4/kubectl
--- PASS: TestDownloadOnly/v1.27.4/kubectl (0.00s)

TestDownloadOnly/v1.27.4/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.27.4/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-670000
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-670000: exit status 85 (73.512166ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-670000 | jenkins | v1.31.2 | 21 Aug 23 03:33 PDT |          |
	|         | -p download-only-670000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-670000 | jenkins | v1.31.2 | 21 Aug 23 03:33 PDT |          |
	|         | -p download-only-670000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.27.4   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/21 03:33:29
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.7 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0821 03:33:29.102423    1376 out.go:296] Setting OutFile to fd 1 ...
	I0821 03:33:29.102543    1376 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 03:33:29.102545    1376 out.go:309] Setting ErrFile to fd 2...
	I0821 03:33:29.102548    1376 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 03:33:29.102660    1376 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17102-920/.minikube/bin
	W0821 03:33:29.102724    1376 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17102-920/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17102-920/.minikube/config/config.json: no such file or directory
	I0821 03:33:29.103606    1376 out.go:303] Setting JSON to true
	I0821 03:33:29.118529    1376 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":183,"bootTime":1692613826,"procs":392,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0821 03:33:29.118585    1376 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0821 03:33:29.123254    1376 out.go:97] [download-only-670000] minikube v1.31.2 on Darwin 13.5 (arm64)
	I0821 03:33:29.127076    1376 out.go:169] MINIKUBE_LOCATION=17102
	I0821 03:33:29.123381    1376 notify.go:220] Checking for updates...
	I0821 03:33:29.134214    1376 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	I0821 03:33:29.137203    1376 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0821 03:33:29.140202    1376 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0821 03:33:29.143212    1376 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	W0821 03:33:29.148171    1376 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0821 03:33:29.148436    1376 config.go:182] Loaded profile config "download-only-670000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0821 03:33:29.148465    1376 start.go:810] api.Load failed for download-only-670000: filestore "download-only-670000": Docker machine "download-only-670000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0821 03:33:29.148514    1376 driver.go:373] Setting default libvirt URI to qemu:///system
	W0821 03:33:29.148529    1376 start.go:810] api.Load failed for download-only-670000: filestore "download-only-670000": Docker machine "download-only-670000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0821 03:33:29.151148    1376 out.go:97] Using the qemu2 driver based on existing profile
	I0821 03:33:29.151154    1376 start.go:298] selected driver: qemu2
	I0821 03:33:29.151156    1376 start.go:902] validating driver "qemu2" against &{Name:download-only-670000 KeepContext:false EmbedCerts:false MinikubeISO:https://github.com/kubernetes/minikube/releases/download/v1.31.0/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-670000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 03:33:29.152971    1376 cni.go:84] Creating CNI manager for ""
	I0821 03:33:29.152984    1376 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0821 03:33:29.152991    1376 start_flags.go:319] config:
	{Name:download-only-670000 KeepContext:false EmbedCerts:false MinikubeISO:https://github.com/kubernetes/minikube/releases/download/v1.31.0/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:download-only-670000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 03:33:29.156739    1376 iso.go:125] acquiring lock: {Name:mk813ea611542195bb0511881888be3fabc72ff6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 03:33:29.159200    1376 out.go:97] Starting control plane node download-only-670000 in cluster download-only-670000
	I0821 03:33:29.159206    1376 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime docker
	I0821 03:33:29.213162    1376 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.4/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4
	I0821 03:33:29.213182    1376 cache.go:57] Caching tarball of preloaded images
	I0821 03:33:29.213338    1376 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime docker
	I0821 03:33:29.218400    1376 out.go:97] Downloading Kubernetes v1.27.4 preload ...
	I0821 03:33:29.218407    1376 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4 ...
	I0821 03:33:29.293200    1376 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.4/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4?checksum=md5:883217b4c813700d926caf1a3f55f0b8 -> /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4
	I0821 03:33:33.723196    1376 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4 ...
	I0821 03:33:33.723332    1376 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-670000"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.27.4/LogsDuration (0.07s)
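Note: the preload fetch logged above passes the expected md5 digest as a ?checksum= query parameter and verifies it after saving the tarball (preload.go:238/249/256). A minimal Go sketch of that download-and-verify flow, with the URL and checksum copied verbatim from the log; this is an illustration, not minikube's actual preload.go, which also handles caching, retries, and lock acquisition:

package main

import (
	"crypto/md5"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	// URL and expected digest copied from the download.go:107 line above.
	url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.4/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4"
	want := "883217b4c813700d926caf1a3f55f0b8"

	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	out, err := os.Create("preloaded-images.tar.lz4")
	if err != nil {
		panic(err)
	}
	defer out.Close()

	// Stream the body to disk and into the hash in a single pass.
	h := md5.New()
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		panic(err)
	}
	if got := fmt.Sprintf("%x", h.Sum(nil)); got != want {
		panic("checksum mismatch: got " + got)
	}
	fmt.Println("preload downloaded and verified")
}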

TestDownloadOnly/v1.28.0-rc.1/json-events (9.38s)

=== RUN   TestDownloadOnly/v1.28.0-rc.1/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-670000 --force --alsologtostderr --kubernetes-version=v1.28.0-rc.1 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-670000 --force --alsologtostderr --kubernetes-version=v1.28.0-rc.1 --container-runtime=docker --driver=qemu2 : (9.376213708s)
--- PASS: TestDownloadOnly/v1.28.0-rc.1/json-events (9.38s)

TestDownloadOnly/v1.28.0-rc.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0-rc.1/preload-exists
--- PASS: TestDownloadOnly/v1.28.0-rc.1/preload-exists (0.00s)

TestDownloadOnly/v1.28.0-rc.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0-rc.1/kubectl
--- PASS: TestDownloadOnly/v1.28.0-rc.1/kubectl (0.00s)

TestDownloadOnly/v1.28.0-rc.1/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.28.0-rc.1/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-670000
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-670000: exit status 85 (77.8775ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only           | download-only-670000 | jenkins | v1.31.2 | 21 Aug 23 03:33 PDT |          |
	|         | -p download-only-670000           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|         | --driver=qemu2                    |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-670000 | jenkins | v1.31.2 | 21 Aug 23 03:33 PDT |          |
	|         | -p download-only-670000           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.27.4      |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|         | --driver=qemu2                    |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-670000 | jenkins | v1.31.2 | 21 Aug 23 03:33 PDT |          |
	|         | -p download-only-670000           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.0-rc.1 |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|         | --driver=qemu2                    |                      |         |         |                     |          |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/21 03:33:37
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.7 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0821 03:33:37.864447    1385 out.go:296] Setting OutFile to fd 1 ...
	I0821 03:33:37.864545    1385 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 03:33:37.864548    1385 out.go:309] Setting ErrFile to fd 2...
	I0821 03:33:37.864550    1385 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 03:33:37.864651    1385 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17102-920/.minikube/bin
	W0821 03:33:37.864707    1385 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17102-920/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17102-920/.minikube/config/config.json: no such file or directory
	I0821 03:33:37.865544    1385 out.go:303] Setting JSON to true
	I0821 03:33:37.880640    1385 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":191,"bootTime":1692613826,"procs":391,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0821 03:33:37.880692    1385 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0821 03:33:37.885648    1385 out.go:97] [download-only-670000] minikube v1.31.2 on Darwin 13.5 (arm64)
	I0821 03:33:37.885702    1385 notify.go:220] Checking for updates...
	I0821 03:33:37.889622    1385 out.go:169] MINIKUBE_LOCATION=17102
	I0821 03:33:37.892688    1385 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	I0821 03:33:37.895584    1385 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0821 03:33:37.898674    1385 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0821 03:33:37.901672    1385 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	W0821 03:33:37.907658    1385 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0821 03:33:37.907944    1385 config.go:182] Loaded profile config "download-only-670000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	W0821 03:33:37.907964    1385 start.go:810] api.Load failed for download-only-670000: filestore "download-only-670000": Docker machine "download-only-670000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0821 03:33:37.908008    1385 driver.go:373] Setting default libvirt URI to qemu:///system
	W0821 03:33:37.908020    1385 start.go:810] api.Load failed for download-only-670000: filestore "download-only-670000": Docker machine "download-only-670000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0821 03:33:37.911583    1385 out.go:97] Using the qemu2 driver based on existing profile
	I0821 03:33:37.911591    1385 start.go:298] selected driver: qemu2
	I0821 03:33:37.911593    1385 start.go:902] validating driver "qemu2" against &{Name:download-only-670000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:download-only-670000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 03:33:37.913506    1385 cni.go:84] Creating CNI manager for ""
	I0821 03:33:37.913518    1385 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0821 03:33:37.913529    1385 start_flags.go:319] config:
	{Name:download-only-670000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0-rc.1 ClusterName:download-only-670000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 03:33:37.917312    1385 iso.go:125] acquiring lock: {Name:mk813ea611542195bb0511881888be3fabc72ff6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 03:33:37.920611    1385 out.go:97] Starting control plane node download-only-670000 in cluster download-only-670000
	I0821 03:33:37.920620    1385 preload.go:132] Checking if preload exists for k8s version v1.28.0-rc.1 and runtime docker
	I0821 03:33:37.977628    1385 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0-rc.1/preloaded-images-k8s-v18-v1.28.0-rc.1-docker-overlay2-arm64.tar.lz4
	I0821 03:33:37.977645    1385 cache.go:57] Caching tarball of preloaded images
	I0821 03:33:37.977810    1385 preload.go:132] Checking if preload exists for k8s version v1.28.0-rc.1 and runtime docker
	I0821 03:33:37.982083    1385 out.go:97] Downloading Kubernetes v1.28.0-rc.1 preload ...
	I0821 03:33:37.982089    1385 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.0-rc.1-docker-overlay2-arm64.tar.lz4 ...
	I0821 03:33:38.057442    1385 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0-rc.1/preloaded-images-k8s-v18-v1.28.0-rc.1-docker-overlay2-arm64.tar.lz4?checksum=md5:e2c3bdfb5f48b43f6c053807f7e73462 -> /Users/jenkins/minikube-integration/17102-920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-rc.1-docker-overlay2-arm64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-670000"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0-rc.1/LogsDuration (0.08s)

TestDownloadOnly/DeleteAll (0.26s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.26s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.23s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-670000
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.23s)

TestBinaryMirror (0.37s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-462000 --alsologtostderr --binary-mirror http://127.0.0.1:49329 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-462000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-462000
--- PASS: TestBinaryMirror (0.37s)
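Note: --binary-mirror points the Kubernetes binary downloads at a local HTTP endpoint (here http://127.0.0.1:49329). A minimal sketch of such a mirror using only the Go standard library; the ./mirror directory layout is an assumption and would need to reproduce whatever paths minikube requests against it:

package main

import (
	"log"
	"net/http"
)

func main() {
	// Serve ./mirror as a stand-in download host for the --binary-mirror flag.
	http.Handle("/", http.FileServer(http.Dir("./mirror")))
	log.Fatal(http.ListenAndServe("127.0.0.1:49329", nil))
}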

TestAddons/Setup (404.15s)

=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-500000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:88: (dbg) Done: out/minikube-darwin-arm64 start -p addons-500000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=qemu2  --addons=ingress --addons=ingress-dns: (6m44.145982875s)
--- PASS: TestAddons/Setup (404.15s)

TestAddons/parallel/Headlamp (11.41s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:800: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-500000 --alsologtostderr -v=1
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5c78f74d8d-llcss" [eadedc67-c7c0-4100-b508-c6e015e959bb] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5c78f74d8d-llcss" [eadedc67-c7c0-4100-b508-c6e015e959bb] Running
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.012219375s
--- PASS: TestAddons/parallel/Headlamp (11.41s)

TestHyperKitDriverInstallOrUpdate (8.86s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (8.86s)

TestErrorSpam/setup (29.71s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-904000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-904000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-904000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-904000 --driver=qemu2 : (29.712701708s)
--- PASS: TestErrorSpam/setup (29.71s)

TestErrorSpam/start (0.36s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-904000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-904000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-904000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-904000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-904000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-904000 start --dry-run
--- PASS: TestErrorSpam/start (0.36s)

TestErrorSpam/status (0.27s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-904000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-904000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-904000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-904000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-904000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-904000 status
--- PASS: TestErrorSpam/status (0.27s)

TestErrorSpam/pause (0.7s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-904000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-904000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-904000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-904000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-904000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-904000 pause
--- PASS: TestErrorSpam/pause (0.70s)

TestErrorSpam/unpause (0.65s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-904000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-904000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-904000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-904000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-904000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-904000 unpause
--- PASS: TestErrorSpam/unpause (0.65s)

TestErrorSpam/stop (3.24s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-904000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-904000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-904000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-904000 stop: (3.068034042s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-904000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-904000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-904000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-904000 stop
--- PASS: TestErrorSpam/stop (3.24s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/17102-920/.minikube/files/etc/test/nested/copy/1362/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (54.12s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-818000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2230: (dbg) Done: out/minikube-darwin-arm64 start -p functional-818000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (54.114484542s)
--- PASS: TestFunctional/serial/StartWithProxy (54.12s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (36.79s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-818000 --alsologtostderr -v=8
E0821 04:15:32.560954    1362 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/client.crt: no such file or directory
E0821 04:15:32.569383    1362 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/client.crt: no such file or directory
E0821 04:15:32.581454    1362 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/client.crt: no such file or directory
E0821 04:15:32.602254    1362 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/client.crt: no such file or directory
E0821 04:15:32.644355    1362 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/client.crt: no such file or directory
E0821 04:15:32.726480    1362 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/client.crt: no such file or directory
E0821 04:15:32.888855    1362 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/client.crt: no such file or directory
E0821 04:15:33.211136    1362 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/client.crt: no such file or directory
E0821 04:15:33.853540    1362 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/client.crt: no such file or directory
E0821 04:15:35.135702    1362 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/client.crt: no such file or directory
E0821 04:15:37.697823    1362 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/client.crt: no such file or directory
E0821 04:15:42.819946    1362 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/client.crt: no such file or directory
E0821 04:15:53.062270    1362 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-darwin-arm64 start -p functional-818000 --alsologtostderr -v=8: (36.78945525s)
functional_test.go:659: soft start took 36.789906792s for "functional-818000" cluster.
--- PASS: TestFunctional/serial/SoftStart (36.79s)

TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.04s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-818000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.04s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.55s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-818000 cache add registry.k8s.io/pause:3.1: (1.278654708s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-818000 cache add registry.k8s.io/pause:3.3: (1.164190083s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-818000 cache add registry.k8s.io/pause:latest: (1.107060834s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.55s)

TestFunctional/serial/CacheCmd/cache/add_local (1.23s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-818000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local1854703023/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 cache add minikube-local-cache-test:functional-818000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 cache delete minikube-local-cache-test:functional-818000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-818000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.23s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.03s)

TestFunctional/serial/CacheCmd/cache/list (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.89s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-818000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (65.060708ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (0.89s)
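Note: the cache_reload cycle above is: remove the image inside the node, confirm crictl no longer finds it, run `cache reload`, then confirm the image is back. A sketch that scripts the same cycle, with the commands taken verbatim from the log; it assumes a minikube binary on PATH and the functional-818000 profile:

package main

import (
	"fmt"
	"os/exec"
)

// run shells out to the minikube binary (assumed on PATH) and echoes output.
func run(args ...string) error {
	out, err := exec.Command("minikube", args...).CombinedOutput()
	fmt.Printf("$ minikube %v\n%s", args, out)
	return err
}

func main() {
	p := "functional-818000"
	run("-p", p, "ssh", "sudo", "docker", "rmi", "registry.k8s.io/pause:latest")
	// Expected to fail now: the image is gone from the node.
	if run("-p", p, "ssh", "sudo", "crictl", "inspecti", "registry.k8s.io/pause:latest") == nil {
		fmt.Println("image unexpectedly still present")
	}
	run("-p", p, "cache", "reload")
	// Should succeed again: cache reload pushed the image back from the host cache.
	if err := run("-p", p, "ssh", "sudo", "crictl", "inspecti", "registry.k8s.io/pause:latest"); err != nil {
		fmt.Println("image still missing after reload:", err)
	}
}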

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/serial/MinikubeKubectlCmd (0.44s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 kubectl -- --context functional-818000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.44s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.57s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-818000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.57s)

TestFunctional/serial/ExtraConfig (37.2s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-818000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0821 04:16:13.544482    1362 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-darwin-arm64 start -p functional-818000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.197745375s)
functional_test.go:757: restart took 37.197870459s for "functional-818000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (37.20s)

TestFunctional/serial/ComponentHealth (0.04s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-818000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.04s)
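Note: ComponentHealth asserts that each control-plane pod reports phase Running and condition Ready, as echoed line by line above. A sketch of that check over the same `kubectl get po -o=json` output; the struct is reduced to the fields the check reads, and the `component` label key is an assumption about how the control-plane pods are labeled:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Only the fields the health check reads from the pod list.
type podList struct {
	Items []struct {
		Metadata struct {
			Labels map[string]string `json:"labels"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-818000",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
	if err != nil {
		panic(err)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		ready := "Unknown"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				ready = c.Status
			}
		}
		fmt.Printf("%s phase: %s, Ready: %s\n",
			p.Metadata.Labels["component"], p.Status.Phase, ready)
	}
}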

TestFunctional/serial/LogsCmd (0.64s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.64s)

TestFunctional/serial/LogsFileCmd (0.64s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd1700733233/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.64s)

TestFunctional/serial/InvalidService (3.88s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-818000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-818000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-818000: exit status 115 (139.933875ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:31746 |
	|-----------|-------------|-------------|----------------------------|

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-818000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.88s)

TestFunctional/parallel/ConfigCmd (0.21s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-818000 config get cpus: exit status 14 (29.533584ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-818000 config get cpus: exit status 14 (28.473583ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.21s)

TestFunctional/parallel/DashboardCmd (8.74s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-818000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-818000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 3300: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.74s)

TestFunctional/parallel/DryRun (0.22s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-818000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-818000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (112.215875ms)

-- stdout --
	* [functional-818000] minikube v1.31.2 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17102
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile

-- /stdout --
** stderr ** 
	I0821 04:17:37.744708    3283 out.go:296] Setting OutFile to fd 1 ...
	I0821 04:17:37.744828    3283 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:17:37.744831    3283 out.go:309] Setting ErrFile to fd 2...
	I0821 04:17:37.744833    3283 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:17:37.744943    3283 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17102-920/.minikube/bin
	I0821 04:17:37.745977    3283 out.go:303] Setting JSON to false
	I0821 04:17:37.762800    3283 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2831,"bootTime":1692613826,"procs":413,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0821 04:17:37.762895    3283 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0821 04:17:37.767076    3283 out.go:177] * [functional-818000] minikube v1.31.2 on Darwin 13.5 (arm64)
	I0821 04:17:37.774939    3283 out.go:177]   - MINIKUBE_LOCATION=17102
	I0821 04:17:37.778938    3283 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	I0821 04:17:37.774959    3283 notify.go:220] Checking for updates...
	I0821 04:17:37.785924    3283 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0821 04:17:37.788818    3283 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0821 04:17:37.792362    3283 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	I0821 04:17:37.794934    3283 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0821 04:17:37.798123    3283 config.go:182] Loaded profile config "functional-818000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0821 04:17:37.798368    3283 driver.go:373] Setting default libvirt URI to qemu:///system
	I0821 04:17:37.802863    3283 out.go:177] * Using the qemu2 driver based on existing profile
	I0821 04:17:37.808871    3283 start.go:298] selected driver: qemu2
	I0821 04:17:37.808876    3283 start.go:902] validating driver "qemu2" against &{Name:functional-818000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:functional-818000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 04:17:37.808931    3283 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0821 04:17:37.814896    3283 out.go:177] 
	W0821 04:17:37.818928    3283 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0821 04:17:37.821895    3283 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-818000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.22s)
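Note: the dry run exits with status 23 because the requested 250MB is below minikube's usable minimum of 1800MB. A sketch of that kind of pre-flight check, with the constant and message modeled on the RSRC_INSUFFICIENT_REQ_MEMORY line above (not minikube's actual validation code):

package main

import "fmt"

// Floor taken from the RSRC_INSUFFICIENT_REQ_MEMORY message above.
const minUsableMemoryMB = 1800

func validateMemory(requestedMB int) error {
	if requestedMB < minUsableMemoryMB {
		return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
			requestedMB, minUsableMemoryMB)
	}
	return nil
}

func main() {
	if err := validateMemory(250); err != nil {
		fmt.Println("X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY:", err)
	}
}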

TestFunctional/parallel/InternationalLanguage (0.11s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-818000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-818000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (107.770083ms)

-- stdout --
	* [functional-818000] minikube v1.31.2 sur Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17102
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant

-- /stdout --
** stderr ** 
	I0821 04:17:37.960594    3294 out.go:296] Setting OutFile to fd 1 ...
	I0821 04:17:37.960698    3294 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:17:37.960702    3294 out.go:309] Setting ErrFile to fd 2...
	I0821 04:17:37.960704    3294 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 04:17:37.960830    3294 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17102-920/.minikube/bin
	I0821 04:17:37.962273    3294 out.go:303] Setting JSON to false
	I0821 04:17:37.978750    3294 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2831,"bootTime":1692613826,"procs":413,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0821 04:17:37.978848    3294 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0821 04:17:37.983894    3294 out.go:177] * [functional-818000] minikube v1.31.2 sur Darwin 13.5 (arm64)
	I0821 04:17:37.989982    3294 out.go:177]   - MINIKUBE_LOCATION=17102
	I0821 04:17:37.990064    3294 notify.go:220] Checking for updates...
	I0821 04:17:37.996958    3294 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	I0821 04:17:37.999980    3294 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0821 04:17:38.002889    3294 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0821 04:17:38.005952    3294 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	I0821 04:17:38.008805    3294 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0821 04:17:38.012172    3294 config.go:182] Loaded profile config "functional-818000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0821 04:17:38.012406    3294 driver.go:373] Setting default libvirt URI to qemu:///system
	I0821 04:17:38.016894    3294 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0821 04:17:38.023935    3294 start.go:298] selected driver: qemu2
	I0821 04:17:38.023940    3294 start.go:902] validating driver "qemu2" against &{Name:functional-818000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:functional-818000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 04:17:38.023987    3294 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0821 04:17:38.029898    3294 out.go:177] 
	W0821 04:17:38.033876    3294 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: The requested memory allocation of 250 MiB is less than the usable minimum of 1800 MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: The requested memory allocation of 250 MiB is less than the usable minimum of 1800 MB
	I0821 04:17:38.037877    3294 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)
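
TestFunctional/parallel/InternationalLanguage starts the cluster with a deliberately undersized memory request under a French locale and asserts that the RSRC_INSUFFICIENT_REQ_MEMORY failure above is reported in that locale's translation. A minimal sketch of the same check by hand, assuming the harness selects the locale via LC_ALL:

    # Request 250 MiB under a French locale; validation fails fast with
    # the localized RSRC_INSUFFICIENT_REQ_MEMORY error quoted above.
    LC_ALL=fr out/minikube-darwin-arm64 start -p functional-818000 --memory 250MB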

TestFunctional/parallel/StatusCmd (0.24s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.24s)
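
The -f flag exercised above takes a Go template over minikube's status struct; the fields used in the logged command are .Host, .Kubelet, .APIServer and .Kubeconfig (the "kublet" spelling there is only the literal label text the template prints, not a field name). A sketch of the same query:

    out/minikube-darwin-arm64 -p functional-818000 status \
      -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
    # typical output: host:Running,kubelet:Running,apiserver:Running,kubeconfig:Configured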

TestFunctional/parallel/AddonsCmd (0.12s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

TestFunctional/parallel/PersistentVolumeClaim (24.61s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [497a18a3-4473-413e-bf26-83b0fdbae4cf] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.013938125s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-818000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-818000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-818000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-818000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [33ca22c1-51f1-46f3-8525-cc89e4597371] Pending
helpers_test.go:344: "sp-pod" [33ca22c1-51f1-46f3-8525-cc89e4597371] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [33ca22c1-51f1-46f3-8525-cc89e4597371] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.018712625s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-818000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-818000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-818000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [2bddb7f1-948e-4fd5-8568-ca9ccb51f806] Pending
helpers_test.go:344: "sp-pod" [2bddb7f1-948e-4fd5-8568-ca9ccb51f806] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [2bddb7f1-948e-4fd5-8568-ca9ccb51f806] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.01254725s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-818000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.61s)
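
The pvc.yaml applied above is not reproduced in the log. A minimal claim consistent with what the test verifies (a PVC named myclaim, bound by the default storage class) might look like the following; the requested size is illustrative:

    kubectl --context functional-818000 apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: myclaim
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 500Mi
    EOF

The delete-and-reapply of pod.yaml is the point of the test: /tmp/mount/foo written through the first sp-pod is still present when the second sp-pod lists the volume.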

TestFunctional/parallel/SSHCmd (0.13s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.13s)

TestFunctional/parallel/CpCmd (0.28s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 ssh -n functional-818000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 cp functional-818000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd366448724/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 ssh -n functional-818000 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.28s)
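
minikube cp addresses the guest side as <profile>:<path>, so the two invocations above copy in opposite directions:

    # host -> guest
    out/minikube-darwin-arm64 -p functional-818000 cp testdata/cp-test.txt /home/docker/cp-test.txt
    # guest -> host (the destination path here is illustrative)
    out/minikube-darwin-arm64 -p functional-818000 cp functional-818000:/home/docker/cp-test.txt /tmp/cp-test.txt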

TestFunctional/parallel/FileSync (0.06s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1362/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 ssh "sudo cat /etc/test/nested/copy/1362/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.06s)

TestFunctional/parallel/CertSync (0.41s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1362.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 ssh "sudo cat /etc/ssl/certs/1362.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1362.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 ssh "sudo cat /usr/share/ca-certificates/1362.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/13622.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 ssh "sudo cat /etc/ssl/certs/13622.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/13622.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 ssh "sudo cat /usr/share/ca-certificates/13622.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.41s)
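
The /etc/ssl/certs/51391683.0 and 3ec20f2e.0 names checked above follow the OpenSSL convention of linking certificates by subject hash. The hash behind either .0 filename can be recomputed from the corresponding PEM:

    # prints the 8-hex-digit subject hash, e.g. 51391683
    openssl x509 -noout -subject_hash -in /usr/share/ca-certificates/1362.pem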

TestFunctional/parallel/NodeLabels (0.05s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-818000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.05s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.14s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-818000 ssh "sudo systemctl is-active crio": exit status 1 (141.550083ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.14s)
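
The non-zero exit above is the expected outcome: systemctl is-active prints "inactive" and exits 3 for a stopped unit, and minikube ssh turns that remote failure into its own exit status 1 (the remote status 3 is what the stderr reports). With docker as the configured runtime, the check reduces to:

    out/minikube-darwin-arm64 -p functional-818000 ssh "sudo systemctl is-active crio"
    echo $?   # non-zero confirms cri-o is not the active runtime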

TestFunctional/parallel/License (0.19s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.19s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.25s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.25s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-818000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.27.4
registry.k8s.io/kube-proxy:v1.27.4
registry.k8s.io/kube-controller-manager:v1.27.4
registry.k8s.io/kube-apiserver:v1.27.4
registry.k8s.io/etcd:3.5.7-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-818000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-818000
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-818000 image ls --format short --alsologtostderr:
I0821 04:17:43.319079    3316 out.go:296] Setting OutFile to fd 1 ...
I0821 04:17:43.319453    3316 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0821 04:17:43.319458    3316 out.go:309] Setting ErrFile to fd 2...
I0821 04:17:43.319461    3316 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0821 04:17:43.319599    3316 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17102-920/.minikube/bin
I0821 04:17:43.320011    3316 config.go:182] Loaded profile config "functional-818000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
I0821 04:17:43.320074    3316 config.go:182] Loaded profile config "functional-818000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
I0821 04:17:43.320874    3316 ssh_runner.go:195] Run: systemctl --version
I0821 04:17:43.320884    3316 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/functional-818000/id_rsa Username:docker}
I0821 04:17:43.346717    3316 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.08s)
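
As the --alsologtostderr trace shows, image ls is a thin wrapper: minikube opens an SSH session to the node and runs the container runtime's own listing command. The underlying query can be issued directly:

    out/minikube-darwin-arm64 -p functional-818000 ssh 'docker images --no-trunc --format "{{json .}}"'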

TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-818000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/minikube-local-cache-test | functional-818000 | f2d2fad9f828c | 30B    |
| docker.io/library/nginx                     | alpine            | 397432849901d | 43.4MB |
| registry.k8s.io/kube-proxy                  | v1.27.4           | 532e5a30e948f | 66.5MB |
| registry.k8s.io/coredns/coredns             | v1.10.1           | 97e04611ad434 | 51.4MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| docker.io/library/nginx                     | latest            | ab73c7fd67234 | 192MB  |
| registry.k8s.io/kube-controller-manager     | v1.27.4           | 389f6f052cf83 | 107MB  |
| registry.k8s.io/kube-scheduler              | v1.27.4           | 6eb63895cb67f | 56.2MB |
| registry.k8s.io/etcd                        | 3.5.7-0           | 24bc64e911039 | 181MB  |
| registry.k8s.io/pause                       | 3.9               | 829e9de338bd5 | 514kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| gcr.io/google-containers/addon-resizer      | functional-818000 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/kube-apiserver              | v1.27.4           | 64aece92d6bde | 115MB  |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-818000 image ls --format table --alsologtostderr:
I0821 04:17:43.542810    3322 out.go:296] Setting OutFile to fd 1 ...
I0821 04:17:43.542955    3322 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0821 04:17:43.542958    3322 out.go:309] Setting ErrFile to fd 2...
I0821 04:17:43.542960    3322 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0821 04:17:43.543087    3322 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17102-920/.minikube/bin
I0821 04:17:43.543505    3322 config.go:182] Loaded profile config "functional-818000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
I0821 04:17:43.543571    3322 config.go:182] Loaded profile config "functional-818000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
I0821 04:17:43.544381    3322 ssh_runner.go:195] Run: systemctl --version
I0821 04:17:43.544392    3322 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/functional-818000/id_rsa Username:docker}
I0821 04:17:43.571079    3322 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-818000 image ls --format json --alsologtostderr:
[{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"64aece92d6bde5b472d8185fcd2d5ab1add8814923a26561821f7cab5e819388","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.27.4"],"size":"115000000"},{"id":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"51400000"},{"id":"24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.7-0"],"size":"181000000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"6eb63895cb67fce76da3ed6eaaa865ff55e7c761c9e6a691a83855ff0987a085","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.27.4"],"size":"56200000"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2
ff72b43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"514000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"f2d2fad9f828cd5cb401fc5ce7a74d9e893d6ec4a63c01c4c992d17a46b71cf1","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-818000"],"size":"30"},{"id":"397432849901d4b78b8fda5db7d50e074ac273977a4a78ce47ad069d4a15e091","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43400000"},{"id":"389f6f052cf83156f82a2bbbf6ea2c24292d246b58900d91f6a1707eacf510b2","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.27.4"],"size":"107000000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-818000"],"size":"32900000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDiges
ts":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"ab73c7fd672341e41ec600081253d0b99ea31d0c1acdfb46a1485004472da7ac","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"192000000"},{"id":"532e5a30e948f1c084333316b13e68fbeff8df667f3830b082005127a6d86317","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.27.4"],"size":"66500000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-818000 image ls --format json --alsologtostderr:
I0821 04:17:43.471252    3320 out.go:296] Setting OutFile to fd 1 ...
I0821 04:17:43.471389    3320 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0821 04:17:43.471394    3320 out.go:309] Setting ErrFile to fd 2...
I0821 04:17:43.471396    3320 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0821 04:17:43.471511    3320 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17102-920/.minikube/bin
I0821 04:17:43.471957    3320 config.go:182] Loaded profile config "functional-818000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
I0821 04:17:43.472017    3320 config.go:182] Loaded profile config "functional-818000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
I0821 04:17:43.472897    3320 ssh_runner.go:195] Run: systemctl --version
I0821 04:17:43.472907    3320 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/functional-818000/id_rsa Username:docker}
I0821 04:17:43.499222    3320 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)
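
Of the four list formats, JSON is the most convenient to post-process. A small sketch, assuming jq is available on the host:

    out/minikube-darwin-arm64 -p functional-818000 image ls --format json \
      | jq -r '.[] | "\(.repoTags[0])\t\(.size)"' | sort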

TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-818000 image ls --format yaml --alsologtostderr:
- id: 397432849901d4b78b8fda5db7d50e074ac273977a4a78ce47ad069d4a15e091
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43400000"
- id: ab73c7fd672341e41ec600081253d0b99ea31d0c1acdfb46a1485004472da7ac
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "192000000"
- id: 532e5a30e948f1c084333316b13e68fbeff8df667f3830b082005127a6d86317
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.27.4
size: "66500000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-818000
size: "32900000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 64aece92d6bde5b472d8185fcd2d5ab1add8814923a26561821f7cab5e819388
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.27.4
size: "115000000"
- id: 389f6f052cf83156f82a2bbbf6ea2c24292d246b58900d91f6a1707eacf510b2
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.27.4
size: "107000000"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "514000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: f2d2fad9f828cd5cb401fc5ce7a74d9e893d6ec4a63c01c4c992d17a46b71cf1
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-818000
size: "30"
- id: 6eb63895cb67fce76da3ed6eaaa865ff55e7c761c9e6a691a83855ff0987a085
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.27.4
size: "56200000"
- id: 97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "51400000"
- id: 24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.7-0
size: "181000000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-818000 image ls --format yaml --alsologtostderr:
I0821 04:17:43.395229    3318 out.go:296] Setting OutFile to fd 1 ...
I0821 04:17:43.395798    3318 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0821 04:17:43.395810    3318 out.go:309] Setting ErrFile to fd 2...
I0821 04:17:43.395817    3318 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0821 04:17:43.396244    3318 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17102-920/.minikube/bin
I0821 04:17:43.396970    3318 config.go:182] Loaded profile config "functional-818000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
I0821 04:17:43.397037    3318 config.go:182] Loaded profile config "functional-818000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
I0821 04:17:43.397889    3318 ssh_runner.go:195] Run: systemctl --version
I0821 04:17:43.397903    3318 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/functional-818000/id_rsa Username:docker}
I0821 04:17:43.423897    3318 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)

TestFunctional/parallel/ImageCommands/ImageBuild (1.67s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-818000 ssh pgrep buildkitd: exit status 1 (58.179792ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 image build -t localhost/my-image:functional-818000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-arm64 -p functional-818000 image build -t localhost/my-image:functional-818000 testdata/build --alsologtostderr: (1.541007959s)
functional_test.go:319: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-818000 image build -t localhost/my-image:functional-818000 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
a01966dde7f8: Pulling fs layer
a01966dde7f8: Verifying Checksum
a01966dde7f8: Download complete
a01966dde7f8: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> 71a676dd070f
Step 2/3 : RUN true
---> Running in a0fdf63ea6d9
Removing intermediate container a0fdf63ea6d9
---> ab0d7bcae589
Step 3/3 : ADD content.txt /
---> d44225c91644
Successfully built d44225c91644
Successfully tagged localhost/my-image:functional-818000
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-818000 image build -t localhost/my-image:functional-818000 testdata/build --alsologtostderr:
I0821 04:17:43.672601    3326 out.go:296] Setting OutFile to fd 1 ...
I0821 04:17:43.672773    3326 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0821 04:17:43.672776    3326 out.go:309] Setting ErrFile to fd 2...
I0821 04:17:43.672778    3326 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0821 04:17:43.672903    3326 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17102-920/.minikube/bin
I0821 04:17:43.673246    3326 config.go:182] Loaded profile config "functional-818000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
I0821 04:17:43.673611    3326 config.go:182] Loaded profile config "functional-818000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
I0821 04:17:43.674358    3326 ssh_runner.go:195] Run: systemctl --version
I0821 04:17:43.674368    3326 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17102-920/.minikube/machines/functional-818000/id_rsa Username:docker}
I0821 04:17:43.700320    3326 build_images.go:151] Building image from path: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.2884088296.tar
I0821 04:17:43.700394    3326 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0821 04:17:43.703197    3326 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2884088296.tar
I0821 04:17:43.704548    3326 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2884088296.tar: stat -c "%s %y" /var/lib/minikube/build/build.2884088296.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2884088296.tar': No such file or directory
I0821 04:17:43.704576    3326 ssh_runner.go:362] scp /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.2884088296.tar --> /var/lib/minikube/build/build.2884088296.tar (3072 bytes)
I0821 04:17:43.711874    3326 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2884088296
I0821 04:17:43.714731    3326 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2884088296 -xf /var/lib/minikube/build/build.2884088296.tar
I0821 04:17:43.717693    3326 docker.go:339] Building image: /var/lib/minikube/build/build.2884088296
I0821 04:17:43.717744    3326 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-818000 /var/lib/minikube/build/build.2884088296
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

I0821 04:17:45.170110    3326 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-818000 /var/lib/minikube/build/build.2884088296: (1.45236525s)
I0821 04:17:45.170181    3326 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2884088296
I0821 04:17:45.176476    3326 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2884088296.tar
I0821 04:17:45.180452    3326 build_images.go:207] Built localhost/my-image:functional-818000 from /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.2884088296.tar
I0821 04:17:45.180469    3326 build_images.go:123] succeeded building to: functional-818000
I0821 04:17:45.180471    3326 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 image ls
2023/08/21 04:17:46 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (1.67s)
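
The three logged build steps imply that testdata/build holds a Dockerfile equivalent to the following sketch (content.txt being the file added in step 3/3):

    cat > testdata/build/Dockerfile <<'EOF'
    FROM gcr.io/k8s-minikube/busybox
    RUN true
    ADD content.txt /
    EOF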

TestFunctional/parallel/ImageCommands/Setup (1.55s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.510175417s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-818000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.55s)

TestFunctional/parallel/DockerEnv/bash (0.38s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-818000 docker-env) && out/minikube-darwin-arm64 status -p functional-818000"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-818000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.38s)
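
docker-env prints shell exports (DOCKER_HOST and related variables) that point the host's docker client at the daemon inside the VM, which is why the eval-then-docker-images pipeline above lists the cluster's images. Interactive use is identical:

    eval $(out/minikube-darwin-arm64 -p functional-818000 docker-env)
    docker images   # now served by the Docker daemon inside the minikube VM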

TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

TestFunctional/parallel/ServiceCmd/DeployApp (12.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1436: (dbg) Run:  kubectl --context functional-818000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-818000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-7b684b55f9-w49wx" [0409257d-b782-41cf-8fff-a8ed59b258e2] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-7b684b55f9-w49wx" [0409257d-b782-41cf-8fff-a8ed59b258e2] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 12.017773917s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (12.10s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 image load --daemon gcr.io/google-containers/addon-resizer:functional-818000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-arm64 -p functional-818000 image load --daemon gcr.io/google-containers/addon-resizer:functional-818000 --alsologtostderr: (2.108126125s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.18s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 image load --daemon gcr.io/google-containers/addon-resizer:functional-818000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-arm64 -p functional-818000 image load --daemon gcr.io/google-containers/addon-resizer:functional-818000 --alsologtostderr: (1.468827667s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.54s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.433724792s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-818000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 image load --daemon gcr.io/google-containers/addon-resizer:functional-818000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-arm64 -p functional-818000 image load --daemon gcr.io/google-containers/addon-resizer:functional-818000 --alsologtostderr: (1.959793042s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.51s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 image save gcr.io/google-containers/addon-resizer:functional-818000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.51s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 image rm gcr.io/google-containers/addon-resizer:functional-818000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 image ls
E0821 04:16:54.505207    1362 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.15s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.59s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.59s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.61s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-818000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 image save --daemon gcr.io/google-containers/addon-resizer:functional-818000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-818000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.61s)
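
Taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon exercise the full image round trip between cluster and host; condensed, with the tar path used above:

    out/minikube-darwin-arm64 -p functional-818000 image save gcr.io/google-containers/addon-resizer:functional-818000 /Users/jenkins/workspace/addon-resizer-save.tar
    out/minikube-darwin-arm64 -p functional-818000 image rm gcr.io/google-containers/addon-resizer:functional-818000
    out/minikube-darwin-arm64 -p functional-818000 image load /Users/jenkins/workspace/addon-resizer-save.tar
    out/minikube-darwin-arm64 -p functional-818000 image save --daemon gcr.io/google-containers/addon-resizer:functional-818000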

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-818000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.13s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-818000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [b75c7c5f-16a2-4002-81c2-fb2037fa5063] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [b75c7c5f-16a2-4002-81c2-fb2037fa5063] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.014320333s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.13s)

TestFunctional/parallel/ServiceCmd/List (0.14s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.14s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 service list -o json
functional_test.go:1493: Took "90.944708ms" to run "out/minikube-darwin-arm64 -p functional-818000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.09s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.105.4:32705
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.10s)

TestFunctional/parallel/ServiceCmd/Format (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.10s)

TestFunctional/parallel/ServiceCmd/URL (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.105.4:32705
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.10s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-818000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.104.144.38 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)
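
A note on the DNS checks above: both dig and dscacheutil query the cluster DNS service at 10.96.0.10 from the host, through the tunnel. The same lookup can be reproduced programmatically; the following is a minimal Go sketch (not part of the test harness), assuming only the cluster DNS address and service name shown in the log:

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// Send queries to the cluster DNS service rather than the host's
	// configured resolvers, mirroring
	// `dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A`.
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
			d := net.Dialer{Timeout: 5 * time.Second}
			return d.DialContext(ctx, network, "10.96.0.10:53")
		},
	}
	addrs, err := r.LookupHost(context.Background(), "nginx-svc.default.svc.cluster.local.")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("resolved to:", addrs)
}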

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-818000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.18s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.18s)

TestFunctional/parallel/ProfileCmd/profile_list (0.14s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1314: Took "109.084292ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1328: Took "31.831ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.14s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.15s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1365: Took "112.221125ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1378: Took "33.530958ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.15s)

TestFunctional/parallel/MountCmd/any-port (5.2s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-818000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port203537942/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1692616643984220000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port203537942/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1692616643984220000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port203537942/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1692616643984220000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port203537942/001/test-1692616643984220000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-818000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (58.794625ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 21 11:17 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 21 11:17 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 21 11:17 test-1692616643984220000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 ssh cat /mount-9p/test-1692616643984220000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-818000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [bcb23c58-d07e-4a92-9c9b-c45c9b914521] Pending
helpers_test.go:344: "busybox-mount" [bcb23c58-d07e-4a92-9c9b-c45c9b914521] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [bcb23c58-d07e-4a92-9c9b-c45c9b914521] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [bcb23c58-d07e-4a92-9c9b-c45c9b914521] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.009958541s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-818000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-818000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port203537942/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.20s)
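
Note the probe pattern above: the first findmnt run exits 1 because the 9p mount is not yet visible inside the guest, and the harness simply reruns the command until it succeeds. A minimal Go sketch of that retry loop (the command string and timeout here are illustrative, not the harness's actual parameters):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForMount reruns a shell command until it exits 0 or the deadline
// passes, mirroring how the findmnt probe above is retried.
func waitForMount(cmd string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("sh", "-c", cmd).Run(); err == nil {
			return nil
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("%q did not succeed within %v", cmd, timeout)
}

func main() {
	fmt.Println(waitForMount(`findmnt -T /mount-9p | grep 9p`, 30*time.Second))
}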

TestFunctional/parallel/MountCmd/specific-port (0.94s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-818000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port3205394019/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-818000 ssh "findmnt -T /mount-9p | grep 9p": exit status 80 (66.953959ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get machine status: state: connect: dial unix /Users/jenkins/minikube-integration/17102-920/.minikube/machines/functional-818000/monitor: connect: connection refused
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_mount_3781a175160808ffb81a5a9799c94970c713b69d_0.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-818000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port3205394019/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-818000 ssh "sudo umount -f /mount-9p": exit status 1 (63.354583ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-818000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-818000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port3205394019/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (0.94s)

TestFunctional/delete_addon-resizer_images (0.16s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-818000
--- PASS: TestFunctional/delete_addon-resizer_images (0.16s)

TestFunctional/delete_my-image_image (0.04s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-818000
--- PASS: TestFunctional/delete_my-image_image (0.04s)

TestFunctional/delete_minikube_cached_images (0.04s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-818000
--- PASS: TestFunctional/delete_minikube_cached_images (0.04s)

TestImageBuild/serial/Setup (29.58s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-925000 --driver=qemu2 
E0821 04:18:16.426709    1362 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/client.crt: no such file or directory
image_test.go:69: (dbg) Done: out/minikube-darwin-arm64 start -p image-925000 --driver=qemu2 : (29.576453875s)
--- PASS: TestImageBuild/serial/Setup (29.58s)

TestImageBuild/serial/NormalBuild (1.07s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-925000
image_test.go:78: (dbg) Done: out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-925000: (1.066101458s)
--- PASS: TestImageBuild/serial/NormalBuild (1.07s)

TestImageBuild/serial/BuildWithDockerIgnore (0.16s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-925000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.16s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.1s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-925000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.10s)

TestIngressAddonLegacy/StartLegacyK8sCluster (65.85s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-arm64 start -p ingress-addon-legacy-717000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=qemu2 
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-darwin-arm64 start -p ingress-addon-legacy-717000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=qemu2 : (1m5.850577209s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (65.85s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (13.85s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-717000 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-717000 addons enable ingress --alsologtostderr -v=5: (13.847539208s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (13.85s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.23s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-717000 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.23s)

TestJSONOutput/start/Command (73.03s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-370000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
E0821 04:20:32.557916    1362 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/client.crt: no such file or directory
E0821 04:21:00.267465    1362 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/addons-500000/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 start -p json-output-370000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : (1m13.024918083s)
--- PASS: TestJSONOutput/start/Command (73.03s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.31s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-370000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.31s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.23s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-370000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.23s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (9.07s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-370000 --output=json --user=testUser
E0821 04:21:46.105801    1362 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/functional-818000/client.crt: no such file or directory
E0821 04:21:46.112215    1362 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/functional-818000/client.crt: no such file or directory
E0821 04:21:46.124380    1362 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/functional-818000/client.crt: no such file or directory
E0821 04:21:46.146540    1362 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/functional-818000/client.crt: no such file or directory
E0821 04:21:46.188705    1362 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/functional-818000/client.crt: no such file or directory
E0821 04:21:46.270810    1362 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/functional-818000/client.crt: no such file or directory
E0821 04:21:46.432957    1362 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/functional-818000/client.crt: no such file or directory
E0821 04:21:46.755195    1362 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/functional-818000/client.crt: no such file or directory
E0821 04:21:47.397562    1362 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/functional-818000/client.crt: no such file or directory
E0821 04:21:48.679959    1362 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/functional-818000/client.crt: no such file or directory
E0821 04:21:51.242227    1362 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/functional-818000/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-370000 --output=json --user=testUser: (9.073527833s)
--- PASS: TestJSONOutput/stop/Command (9.07s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.33s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-874000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-874000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (91.697916ms)

-- stdout --
	{"specversion":"1.0","id":"32341aee-6142-45c3-8c75-062de945cfa9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-874000] minikube v1.31.2 on Darwin 13.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9914e716-2602-4f4d-8f48-9bfa2e16e474","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17102"}}
	{"specversion":"1.0","id":"916f8201-42dd-4c83-a0b0-19457529f523","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig"}}
	{"specversion":"1.0","id":"5d6abf95-8c2c-4e05-ba66-62ff775793d5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"26022a1e-86a8-4d11-a315-ad84bfa25bcc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5c1ad953-3057-4872-97d1-8661c5f928f9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube"}}
	{"specversion":"1.0","id":"9c8e16ec-82c3-4e17-81cc-d480e5695ded","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"51139c63-950c-4f57-a54a-e020013cd12a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-874000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-874000
--- PASS: TestErrorJSONOutput (0.33s)
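
Every stdout line above is a CloudEvents-style JSON envelope (specversion, id, source, type, data), which is what --output=json emits for progress steps, info messages, and the final error alike. A minimal Go sketch that decodes such a stream, modeling only the fields visible in this report:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event models the envelope shown in the stdout above; fields not
// visible in this report are omitted.
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// Consume events line by line, e.g. piped from
	// `minikube start --output=json ...`.
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var e event
		if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
			continue // not every line is guaranteed to be JSON
		}
		fmt.Printf("%-40s %s\n", e.Type, e.Data["message"])
	}
}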

TestMainNoArgs (0.03s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestMinikubeProfile (60.97s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-586000 --driver=qemu2 
E0821 04:21:56.364722    1362 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/functional-818000/client.crt: no such file or directory
E0821 04:22:06.607233    1362 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/functional-818000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-arm64 start -p first-586000 --driver=qemu2 : (29.507022667s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p second-587000 --driver=qemu2 
E0821 04:22:27.089436    1362 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17102-920/.minikube/profiles/functional-818000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-arm64 start -p second-587000 --driver=qemu2 : (30.699128583s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 profile first-586000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 profile second-587000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-587000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-587000
helpers_test.go:175: Cleaning up "first-586000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-586000
--- PASS: TestMinikubeProfile (60.97s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-809000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-809000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (100.61975ms)

-- stdout --
	* [NoKubernetes-809000] minikube v1.31.2 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17102
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17102-920/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17102-920/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-809000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-809000 "sudo systemctl is-active --quiet service kubelet": exit status 89 (42.129875ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p NoKubernetes-809000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

TestNoKubernetes/serial/ProfileList (0.15s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.15s)

TestNoKubernetes/serial/Stop (0.06s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-809000
--- PASS: TestNoKubernetes/serial/Stop (0.06s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-809000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-809000 "sudo systemctl is-active --quiet service kubelet": exit status 89 (41.289666ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p NoKubernetes-809000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStartStop/group/old-k8s-version/serial/Stop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-137000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (0.06s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-137000 -n old-k8s-version-137000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-137000 -n old-k8s-version-137000: exit status 7 (28.832167ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-137000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.09s)
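
As the "(may be ok)" note above records, minikube status reports a stopped cluster through a non-zero exit code (7 in this run) while still printing the host state, so a caller has to read both the exit code and stdout. A minimal Go sketch of that handling, reusing the binary path and profile name from the run above (illustrative, not the test's actual code):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "status",
		"--format={{.Host}}", "-p", "old-k8s-version-137000")
	out, err := cmd.Output() // captures stdout; err carries the exit status
	code := 0
	if ee, ok := err.(*exec.ExitError); ok {
		code = ee.ExitCode()
	} else if err != nil {
		fmt.Println("run failed:", err)
		return
	}
	// In the run above this printed host="Stopped" with exit code 7,
	// which the test treats as acceptable after a stop.
	fmt.Printf("host=%q exit=%d\n", out, code)
}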

TestStartStop/group/no-preload/serial/Stop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-776000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/no-preload/serial/Stop (0.06s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-776000 -n no-preload-776000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-776000 -n no-preload-776000: exit status 7 (27.678208ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-776000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/embed-certs/serial/Stop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-644000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/embed-certs/serial/Stop (0.06s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-644000 -n embed-certs-644000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-644000 -n embed-certs-644000: exit status 7 (28.600417ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-644000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-202000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-202000 -n default-k8s-diff-port-202000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-202000 -n default-k8s-diff-port-202000: exit status 7 (28.16125ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-202000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-600000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-600000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/newest-cni/serial/Stop (0.06s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-600000 -n newest-cni-600000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-600000 -n newest-cni-600000: exit status 7 (27.9505ms)

-- stdout --
	Stopped

                                                
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-600000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (25/261)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.27.4/cached-images (0s)

=== RUN   TestDownloadOnly/v1.27.4/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.27.4/cached-images (0.00s)

TestDownloadOnly/v1.27.4/binaries (0s)

=== RUN   TestDownloadOnly/v1.27.4/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.27.4/binaries (0.00s)

TestDownloadOnly/v1.28.0-rc.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0-rc.1/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0-rc.1/cached-images (0.00s)

TestDownloadOnly/v1.28.0-rc.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0-rc.1/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0-rc.1/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:210: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:420: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:474: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/MountCmd/VerifyCleanup (12.73s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-818000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup178691038/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-818000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup178691038/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-818000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup178691038/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-818000 ssh "findmnt -T" /mount1: exit status 1 (62.165916ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-818000 ssh "findmnt -T" /mount1: exit status 1 (85.517458ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-818000 ssh "findmnt -T" /mount1: exit status 1 (68.002667ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-818000 ssh "findmnt -T" /mount1: exit status 1 (63.574792ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-818000 ssh "findmnt -T" /mount1: exit status 1 (87.377792ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-818000 ssh "findmnt -T" /mount1: exit status 1 (60.573542ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-818000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-818000 ssh "findmnt -T" /mount1: exit status 1 (58.913458ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-818000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup178691038/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-818000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup178691038/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-818000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup178691038/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (12.73s)
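
Note: all seven findmnt probes above exited with status 1 before the test gave up, consistent with the skip message: macOS prompts before letting a non-code-signed binary listen on a non-localhost port, so the mount server never becomes reachable from the guest. A minimal manual reproduction, assuming a running functional-818000 profile (the local source path is illustrative):

	# start a mount in the background, then probe it from inside the guest
	out/minikube-darwin-arm64 mount -p functional-818000 /tmp/mnt-src:/mount1 --alsologtostderr -v=1 &
	out/minikube-darwin-arm64 -p functional-818000 ssh "findmnt -T /mount1"

If macOS has blocked the listener, the findmnt probe keeps failing exactly as logged.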

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:296: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.31s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-797000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-797000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-797000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-797000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-797000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-797000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-797000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-797000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-797000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-797000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-797000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-797000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797000"

>>> host: /etc/hosts:
* Profile "cilium-797000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797000"

>>> host: /etc/resolv.conf:
* Profile "cilium-797000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-797000

>>> host: crictl pods:
* Profile "cilium-797000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797000"

>>> host: crictl containers:
* Profile "cilium-797000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797000"

>>> k8s: describe netcat deployment:
error: context "cilium-797000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-797000" does not exist

>>> k8s: netcat logs:
error: context "cilium-797000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-797000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-797000" does not exist

>>> k8s: coredns logs:
error: context "cilium-797000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-797000" does not exist

>>> k8s: api server logs:
error: context "cilium-797000" does not exist

>>> host: /etc/cni:
* Profile "cilium-797000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797000"

>>> host: ip a s:
* Profile "cilium-797000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797000"

>>> host: ip r s:
* Profile "cilium-797000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797000"

>>> host: iptables-save:
* Profile "cilium-797000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797000"

>>> host: iptables table nat:
* Profile "cilium-797000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-797000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-797000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-797000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-797000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-797000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-797000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-797000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-797000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-797000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-797000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-797000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-797000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797000"

>>> host: kubelet daemon config:
* Profile "cilium-797000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797000"

>>> k8s: kubelet logs:
* Profile "cilium-797000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-797000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-797000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-797000

>>> host: docker daemon status:
* Profile "cilium-797000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797000"

>>> host: docker daemon config:
* Profile "cilium-797000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-797000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797000"

>>> host: docker system info:
* Profile "cilium-797000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797000"

>>> host: cri-docker daemon status:
* Profile "cilium-797000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797000"

>>> host: cri-docker daemon config:
* Profile "cilium-797000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-797000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-797000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797000"

>>> host: cri-dockerd version:
* Profile "cilium-797000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797000"

>>> host: containerd daemon status:
* Profile "cilium-797000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797000"

>>> host: containerd daemon config:
* Profile "cilium-797000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-797000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-797000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797000"

>>> host: containerd config dump:
* Profile "cilium-797000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797000"

>>> host: crio daemon status:
* Profile "cilium-797000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797000"

>>> host: crio daemon config:
* Profile "cilium-797000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797000"

>>> host: /etc/crio:
* Profile "cilium-797000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797000"

>>> host: crio config:
* Profile "cilium-797000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797000"

----------------------- debugLogs end: cilium-797000 [took: 2.076772416s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-797000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-797000
--- SKIP: TestNetworkPlugins/group/cilium (2.31s)
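
Note: every debugLogs probe above failed with "context was not found" or Profile "cilium-797000" not found for the same underlying reason: the test was skipped before a cluster was ever started, so neither a minikube profile nor a kubeconfig context named cilium-797000 existed (the empty kubectl config dump above, with clusters: null and contexts: null, confirms this). Had the test run, the cluster would have been created roughly as the log itself suggests; the --cni flag below is an assumption based on the plugin under test:

	# hypothetical: create the profile the probes were looking for, then query it
	out/minikube-darwin-arm64 start -p cilium-797000 --cni=cilium
	kubectl --context cilium-797000 get pods -A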

TestStartStop/group/disable-driver-mounts (0.24s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-246000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-246000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.24s)